101
Yu D, Chai A, Chung STL. Orientation information in encoding facial expressions. Vision Res 2018; 150:29-37. [PMID: 30048659; PMCID: PMC6139277; DOI: 10.1016/j.visres.2018.07.001]
Abstract
Previous research showed that we use different regions of a face to categorize different facial expressions, e.g. the mouth region for identifying happy faces, and the eyebrows, eyes and upper part of the nose for identifying angry faces. These findings imply that spatial information along or close to the horizontal orientation might be more useful than information along other orientations for facial expression recognition. In this study, we examined how performance for recognizing facial expressions depends on the spatial information along different orientations, and whether pixel-level differences in the face images could account for subjects' performance. Four facial expressions (angry, fearful, happy and sad) were tested. An orientation filter (bandwidth = 23°) was applied to restrict the information within the face images, with the center of the filter ranging from 0° (horizontal) to 150° in steps of 30°. Accuracy for recognizing facial expressions was measured for an unfiltered condition and the six filtered conditions. For all four facial expressions, recognition performance (normalized d') was virtually identical for filter orientations of -30°, horizontal and 30°, and declined systematically as the filter orientation approached vertical. The information contained in the mouth and eye regions was a significant predictor of subjects' responses (based on the confusion patterns). We conclude that young adults with normal vision categorize facial expressions most effectively based on spatial information around the horizontal orientation, which captures the primary changes of facial features across expressions. Across all spatial orientations, the information contained in the mouth and eye regions contributes significantly to facial expression categorization.
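A minimal Python sketch of Fourier-domain orientation filtering of the kind described above. The Gaussian orientation profile, the reading of the 23° bandwidth as a full width at half maximum, and the random stand-in image are illustrative assumptions; the paper's exact filter design is not reproduced here.

```python
import numpy as np

def orientation_filter(image, center_deg, bandwidth_deg=23.0):
    """Pass image structure oriented near center_deg (0 = horizontal).

    Structure oriented at angle t concentrates its spectral energy at
    t + 90 in the frequency plane, so the mask is built around that axis.
    """
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    theta = np.degrees(np.arctan2(fy, fx))                  # spectral angle
    # Angular distance to the target spectral axis, folded into [-90, 90).
    diff = (theta - (center_deg + 90.0) + 90.0) % 180.0 - 90.0
    sigma = bandwidth_deg / 2.355                           # FWHM -> sigma
    mask = np.exp(-0.5 * (diff / sigma) ** 2)
    mask[0, 0] = 1.0                                        # keep mean level
    return np.real(np.fft.ifft2(np.fft.fft2(image) * mask))

face = np.random.default_rng(0).random((256, 256))          # stand-in image
filtered = {c: orientation_filter(face, c) for c in range(0, 180, 30)}
```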
102
de Klerk CCJM, Hamilton AFDC, Southgate V. Eye contact modulates facial mimicry in 4-month-old infants: An EMG and fNIRS study. Cortex 2018; 106:93-103. [PMID: 29890487; PMCID: PMC6143479; DOI: 10.1016/j.cortex.2018.05.002]
Abstract
Mimicry, the tendency to spontaneously and unconsciously copy others' behaviour, plays an important role in social interactions. It facilitates rapport between strangers, and is flexibly modulated by social signals, such as eye contact. However, little is known about the development of this phenomenon in infancy, and it is unknown whether mimicry is modulated by social signals from early in life. Here we addressed this question by presenting 4-month-old infants with videos of models performing facial actions (e.g., mouth opening, eyebrow raising) and hand actions (e.g., hand opening and closing, finger actions) accompanied by direct or averted gaze, while we measured their facial and hand muscle responses using electromyography to obtain an index of mimicry (Experiment 1). In Experiment 2 the infants observed the same stimuli while we used functional near-infrared spectroscopy to investigate the brain regions involved in the modulation of mimicry by eye contact. We found that 4-month-olds only showed evidence of mimicry when they observed facial actions accompanied by direct gaze. Experiment 2 suggests that this selective facial mimicry may have been associated with activation over the posterior superior temporal sulcus. These findings provide the first demonstration of the modulation of mimicry by social signals in young human infants, and suggest that mimicry plays an important role in social interactions from early in life.
103
Martin JG, Davis CE, Riesenhuber M, Thorpe SJ. Zapping 500 faces in less than 100 seconds: Evidence for extremely fast and sustained continuous visual search. Sci Rep 2018; 8:12482. [PMID: 30127454; PMCID: PMC6102288; DOI: 10.1038/s41598-018-30245-8]
Abstract
A number of studies have shown human subjects' impressive ability to detect faces in individual images, with saccade reaction times starting as fast as 100 ms after stimulus onset. Here, we report evidence that humans can rapidly and continuously saccade towards single faces embedded in different scenes at rates approaching 6 faces/scenes per second (including blinks and eye movement times). These observations are impressive, given that humans usually make no more than 2 to 5 saccades per second when searching a single scene with eye movements. Surprisingly, attempts to hide the faces by blending them into a large background scene had little effect on targeting rates, saccade reaction times, or targeting accuracy. Upright faces were found more quickly and more accurately than inverted faces, both with and without a cluttered background scene, and over a large range of eccentricities (4°-16°). The fastest subject in our study made continuous saccades to 500 small 3° upright faces at 4° eccentricity in only 96 seconds. The maximum face-targeting rate achieved by any subject over any sequence of 7 faces in Experiment 3, for the no-scene, upright-face condition, was 6.5 faces targeted per second. Our data provide evidence that the human visual system includes an ultra-rapid and continuous object localization system for upright faces. Furthermore, these observations indicate that continuous paradigms such as the one we have used can push humans to make remarkably fast reaction times that impose strong constraints and challenges on models of how, where, and when visual processing occurs in the human brain.
104
Kramer RSS, Mileva M, Ritchie KL. Inter-rater agreement in trait judgements from faces. PLoS One 2018; 13:e0202655. [PMID: 30118520; PMCID: PMC6097668; DOI: 10.1371/journal.pone.0202655]
Abstract
Researchers have long been interested in how social evaluations are made based upon first impressions of faces. It is also important to consider the level of agreement we see in such evaluations across raters and what this may tell us. Typically, high levels of inter-rater agreement for facial judgements are reported, but the measures used may be misleading. At present, studies commonly report Cronbach's α as a way to quantify agreement, although there are various issues with the use of this measure. Most importantly, because researchers treat raters as items, Cronbach's α is inflated by larger sample sizes even when agreement between raters is fixed. Here, we considered several alternative measures and investigated whether these better discriminate between traits that were predicted to show low (parental resemblance), intermediate (attractiveness, dominance, trustworthiness), and high (age, gender) levels of agreement. Importantly, the level of inter-rater agreement has not previously been studied for many of these traits. In addition, we investigated whether familiar faces resulted in differing levels of agreement in comparison with unfamiliar faces. Our results suggest that alternative measures may prove more informative than Cronbach's α when determining how well raters agree in their judgements. Further, we found no apparent influence of familiarity on levels of agreement. Finally, we show that, like attractiveness, both trustworthiness and dominance show significant levels of private taste (personal or idiosyncratic rater perceptions), although shared taste (perceptions shared with other raters) explains similar levels of variance in people's perceptions. In conclusion, we recommend that researchers investigating social judgements of faces consider alternatives to Cronbach's α, and that they should also be prepared to examine both the potential value and the origin of private taste, as these might prove informative.
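The inflation described above follows from the standardized (Spearman-Brown) form of Cronbach's α: with k raters treated as items and a fixed mean inter-rater correlation r, α = k·r/(1 + (k - 1)·r) climbs toward 1 as k grows. A small Python illustration; the r = 0.2 value is arbitrary:

```python
# Standardized Cronbach's alpha (Spearman-Brown): k raters treated as
# items, with a fixed mean inter-rater correlation r.
def standardized_alpha(k, r):
    return k * r / (1 + (k - 1) * r)

for k in (5, 20, 100):
    print(k, round(standardized_alpha(k, r=0.2), 3))
# 5 -> 0.556, 20 -> 0.833, 100 -> 0.962: apparently "high agreement"
# from a modest r = 0.2, purely because more raters were recruited.
```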
105
Brinker TJ, Brieske CM, Esser S, Klode J, Mons U, Batra A, Rüther T, Seeger W, Enk AH, von Kalle C, Berking C, Heppt MV, Gatzka MV, Bernardes-Souza B, Schlenk RF, Schadendorf D. A Face-Aging App for Smoking Cessation in a Waiting Room Setting: Pilot Study in an HIV Outpatient Clinic. J Med Internet Res 2018; 20:e10976. [PMID: 30111525; PMCID: PMC6115598; DOI: 10.2196/10976]
Abstract
BACKGROUND There is strong evidence for the effectiveness of addressing tobacco use in health care settings. However, few smokers receive cessation advice when visiting a hospital. Implementing smoking cessation technology in outpatient waiting rooms could be an effective strategy for change, with the potential to expose almost all patients visiting a health care provider without requiring physician action. OBJECTIVE The objective of this study was to develop an intervention for smoking cessation that would make use of the time patients spend in a waiting room by passively exposing them to a face-aging, public morphing, tablet-based app; to pilot the intervention in the waiting room of an HIV outpatient clinic; and to measure the perceptions of this intervention among smoking and nonsmoking HIV patients. METHODS We developed a kiosk version of our 3-dimensional face-aging app Smokerface, which shows users how their face would look with or without cigarette smoking 1 to 15 years in the future. We placed a tablet running the app on a table in the middle of the waiting room of our HIV outpatient clinic, connected to a large monitor attached to the opposite wall. A researcher noted all the patients who were using the waiting room. If a patient did not initiate app use within 30 seconds of waiting time, the researcher encouraged him or her to do so. Those using the app were asked to complete a questionnaire. RESULTS During a 19-day period, 464 patients visited the waiting room, of whom 187 (40.3%) tried the app and 179 (38.6%) completed the questionnaire. Of those who completed the questionnaire, 139 of 176 (79.0%) were men and 84 of 179 (46.9%) were smokers. Of the smokers, 55 of 81 (68%) said the intervention motivated them to quit (men: 45, 68%; women: 10, 67%); 41 (51%) said that it motivated them to discuss quitting with their doctor (men: 32, 49%; women: 9, 60%); and 72 (91%) perceived the intervention as fun (men: 57, 90%; women: 15, 94%). Of the nonsmokers, 92 (98%) said that it motivated them never to take up smoking (men: 72, 99%; women: 20, 95%). Among all patients, 102 (22.0%) watched another patient try the app without trying it themselves; thus, a total of 289 (62.3%) of the 464 patients were exposed to the intervention (average waiting time 21 minutes). CONCLUSIONS A face-aging app implemented in a waiting room provides a novel opportunity to motivate patients visiting a health care provider to quit smoking, to address quitting at their subsequent appointment and thereby encourage physician-delivered smoking cessation, or not to take up smoking.
106
Laeng B, Kiambarua KG, Hagen T, Bochynska A, Lubell J, Suzuki H, Okubo M. The "face race lightness illusion": An effect of the eyes and pupils? PLoS One 2018; 13:e0201603. [PMID: 30071065; PMCID: PMC6072068; DOI: 10.1371/journal.pone.0201603]
Abstract
In an internet-based, forced-choice test of the 'face race lightness illusion', the majority of respondents, regardless of their ethnicity, reported perceiving the African face as darker in skin tone than the European face, even though the mean luminance, contrast and number of pixels of the images were identical. In the laboratory, using eye tracking, it was found that eye fixations were distributed differently on the African and European faces, such that gaze dwelled relatively longer on the locally brighter regions of the African face and, in turn, mean pupil diameters were smaller than for the European face. There was no relationship between pupil size and implicit social attitude (IAT) scores. In another experiment, the faces were presented either tachistoscopically (140 ms) or for longer (2500 ms): when gaze was prevented from looking directly at the faces in the former condition, the tendency to report the African face as "dark" disappeared, but it was present when gaze was free to move for just a few seconds. We conclude that the presence of the illusion depends on oculomotor behavior, and we also propose a novel account based on a predictive strategy of sensory acquisition. Specifically, by differentially directing gaze towards facial regions that differ locally in luminance, the resulting changes in retinal illuminance yield respectively darker or brighter percepts while attending to each face, hence minimizing the mismatch between visual input and the learned perceptual prototypes of ethnic categories.
107
Pavlidis I, Garza I, Tsiamyrtzis P, Dcosta M, Swanson JW, Krouskop T, Levine JA. Dynamic Quantification of Migrainous Thermal Facial Patterns - A Pilot Study. IEEE J Biomed Health Inform 2018; 23:1225-1233. [PMID: 30004895; DOI: 10.1109/jbhi.2018.2855670]
Abstract
This article documents thermophysiological patterns associated with migraine episodes, in which the inner canthi and supraorbital temperatures drop significantly compared to normal conditions. These temperature drops are likely due to vasoconstriction of the ophthalmic arteries under the inner canthi and sympathetic activation of the eccrine glands in the supraorbital region, respectively. The thermal patterns were observed in eight migraine patients and meticulously quantified using advanced computational methods capable of delineating small anatomical structures in thermal imagery and tracking them automatically over time. These methods open the way for monitoring migraine episodes in nonclinical environments where the patient maintains directional attention, such as in front of his/her computer at home or at work. This development has the potential to significantly expand the operational envelope of migraine studies.
108
Marcinkowska UM, Jasienska G, Prokop P. A Comparison of Masculinity Facial Preference Among Naturally Cycling, Pregnant, Lactating, and Post-Menopausal Women. Arch Sex Behav 2018; 47:1367-1374. [PMID: 29071543; PMCID: PMC5954065; DOI: 10.1007/s10508-017-1093-3]
Abstract
Women show cyclical shifts in preferences for physical male traits. Here we investigated how fertility status influences women's preference for facial masculinity in men by analyzing a large sample of heterosexual women (N = 3720). Women were either regularly cycling (in both low- and high-conception-probability groups), lactating, or currently in a non-fertile state (pregnant or post-menopausal). Analyses simultaneously controlled for women's age and sexual openness. Participants judged the attractiveness of masculinized and feminized men's faces via two-alternative forced-choice questions. After controlling for the effects of age and sociosexuality, regularly cycling and pregnant women showed a stronger preference for masculinity than lactating and post-menopausal women. However, there was no significant difference in masculinity preference between women in the low- and high-conception-probability groups. Women's sociosexuality showed a positive but very weak association with preference for men's facial masculinity. We suggest that women's overall, long-term hormonal state (cycling, post-menopausal) is a stronger predictor of preference for sexual dimorphism than changes in hormonal levels through the cycle.
109
Vojtech JM, Cler GJ, Stepp CE. Prediction of Optimal Facial Electromyographic Sensor Configurations for Human-Machine Interface Control. IEEE Trans Neural Syst Rehabil Eng 2018; 26:1566-1576. [PMID: 29994124; DOI: 10.1109/tnsre.2018.2849202]
Abstract
Surface electromyography (sEMG) is a promising computer access method for individuals with motor impairments. However, optimal sensor placement is a tedious task requiring trial and error by an expert, particularly when recording from facial musculature likely to be spared in individuals with neurological impairments. We sought to reduce the complexity of sEMG sensor configuration by using quantitative signal features extracted from a short calibration task to predict human-machine interface (HMI) performance. A cursor control system allowed individuals to activate specific sEMG-targeted muscles to control an onscreen cursor and navigate a target selection task. The task was repeated for a range of sensor configurations to elicit a range of signal qualities. Signal features were extracted from the calibration of each configuration and examined via a principal component factor analysis in order to predict HMI performance during subsequent tasks. Feature components most influenced by the energy and complexity of the EMG signal and by muscle activity between the sensors were significantly predictive of HMI performance. However, configuration order had a greater effect on performance than the configurations themselves, suggesting that non-experts can place sEMG sensors in the vicinity of usable muscle sites for computer access and that healthy individuals will learn to efficiently control the HMI system.
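A rough Python sketch of the calibration pipeline outlined above, under stated assumptions: root-mean-square amplitude and waveform length stand in for the unspecified "energy" and "complexity" features, and principal components of the features feed a linear model of performance. The data, window sizes, and scores are placeholders, not the authors' protocol:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def emg_features(window):
    """Per-window energy and complexity proxies for one sEMG channel."""
    rms = np.sqrt(np.mean(window ** 2))            # signal energy
    wl = np.sum(np.abs(np.diff(window)))           # waveform length
    return rms, wl

rng = np.random.default_rng(0)
calib = rng.normal(size=(40, 4, 500))   # 40 trials, 4 sensors, 500 samples
X = np.array([[f for ch in trial for f in emg_features(ch)] for trial in calib])
perf = rng.random(40)                   # placeholder HMI performance scores

components = PCA(n_components=3).fit_transform(X)
model = LinearRegression().fit(components, perf)
print(model.score(components, perf))    # variance explained on calibration
```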
110
Fu Y, Selcuk E, Moore SR, Depue RA. Touch-induced face conditioning is mediated by genetic variation in opioid but not oxytocin receptors. Sci Rep 2018; 8:9004. [PMID: 29899398; PMCID: PMC5998070; DOI: 10.1038/s41598-018-27199-2]
Abstract
Soft touch possesses strong prosocial effects that facilitate social bonding and group cohesion in animals. Touch activates opioids (OP) and oxytocin (OXT), two neuromodulators involved in affiliative behaviors and social bonding. We examined whether touch serves as an unconditioned reward in affective conditioning of human faces, a basic process in social bonding, and whether this process is mediated by variation in mu-OP (OPRM1) and OXT (rs53576) receptor genes. Participants viewed affectively-neutral human faces, half of which were paired with a brief soft brushing on the forearm as an unconditioned stimulus (US). Paired and unpaired faces were rated for positive affective and sensory features of touch. Variation in OPRM1 but not rs53576 significantly modulated strength and development of conditioning, indicating that touch-induced mu-OP but not OXT activity provides rewarding properties of a US in conditioning. Implications for touch-induced mu-OP activity in normal and disordered conditioned social bonding are discussed.
111
Wang H, Song W, Liu W, Song N, Wang Y, Pan H. A Bayesian Scene-Prior-Based Deep Network Model for Face Verification. Sensors (Basel) 2018; 18:1906. [PMID: 29891830; PMCID: PMC6022064; DOI: 10.3390/s18061906]
Abstract
Face recognition/verification has received great attention in both theory and application over the past two decades. Recently, deep learning has been considered a very powerful tool for improving the performance of face recognition/verification. With large labeled training datasets, the features obtained from deep learning networks can achieve higher accuracy in comparison with shallow networks. However, many reported face recognition/verification approaches rely heavily on the large size and representativeness of the training set, and most of them tend to suffer a serious performance drop or even fail to work if fewer training samples per person are available. Hence, a small number of training samples may cause the deep features to vary greatly. We aim to solve this critical problem in this paper. Inspired by recent research in scene domain transfer, for a given face image, a new series of possible scenarios for this face can be deduced from the scene semantics extracted from other individuals in a face dataset. We believe that the "scene" or background in an image, that is, samples with more varied scenes for a given person, may determine the intrinsic features among the faces of the same individual. In order to validate this belief, we propose a Bayesian scene-prior-based deep learning model with the aim of extracting important features from background scenes. By learning a scene model on the basis of a labeled face dataset via the Bayesian idea, the proposed method transforms a face image into new face images by referring to the given face with the learnt scene dictionary. Because the newly derived faces may have scenes similar to the input face, face-verification performance can be improved without background variance, while the number of training samples is significantly reduced. Experiments conducted on the Labeled Faces in the Wild (LFW) dataset view #2 subset show that this model can increase verification accuracy to 99.2% by means of scene transfer learning (99.12% in the literature with an unsupervised protocol). Meanwhile, our model achieves 94.3% accuracy on the YouTube Faces database (93.2% in the literature with an unsupervised protocol).
112
Gonzalez Viejo C, Fuentes S, Torrico DD, Dunshea FR. Non-Contact Heart Rate and Blood Pressure Estimations from Video Analysis and Machine Learning Modelling Applied to Food Sensory Responses: A Case Study for Chocolate. Sensors (Basel) 2018; 18:1802. [PMID: 29865289; PMCID: PMC6022164; DOI: 10.3390/s18061802]
Abstract
Traditional methods to assess heart rate (HR) and blood pressure (BP) are intrusive and can affect results in sensory analysis of food, as participants are aware of the sensors. This paper aims to validate a non-contact method to measure HR using the photoplethysmography (PPG) technique and to develop machine learning (ML) models to predict real HR and BP from raw video analysis (RVA), with an example application to chocolate consumption. The RVA used a computer vision algorithm based on luminosity changes in the different RGB color channels across three face regions (forehead and both cheeks). To validate the proposed method and the ML models, a home oscillometric monitor and a finger sensor were used. Results showed high correlations for the G color channel (R² = 0.83). Two ML models were developed using the three face regions: (i) Model 1, which predicts HR and BP from the RVA outputs (R = 0.85), and (ii) Model 2, a time-series model that maps HR, magnitude and luminosity inputs from the RVA to HR values every second (R = 0.97). An application to the sensory analysis of chocolate showed significant correlations between changes in HR and BP and chocolate hardness and purchase intention.
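A minimal sketch of the green-channel principle behind such non-contact HR estimates: average the G channel over a skin region frame by frame, band-pass the trace to the cardiac range, and read heart rate off the dominant spectral peak. The ROI coordinates, frame rate, and filter design are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_hr(frames, fps=30.0, roi=(slice(10, 60), slice(80, 160))):
    """frames: iterable of HxWx3 RGB uint8 arrays; returns HR in bpm."""
    g = np.array([f[roi[0], roi[1], 1].mean() for f in frames])  # G channel
    g = g - g.mean()
    b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")
    pulse = filtfilt(b, a, g)                     # keep the 42-240 bpm band
    spec = np.abs(np.fft.rfft(pulse)) ** 2
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fps)
    return 60.0 * freqs[np.argmax(spec)]          # dominant frequency -> bpm

# usage: hr_bpm = estimate_hr(list_of_video_frames, fps=30.0)
```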
113
Sinko K, Tran US, Wutzl A, Seemann R, Millesi G, Jagsch R. Perception of aesthetics and personality traits in orthognathic surgery patients: A comparison of still and moving images. PLoS One 2018; 13:e0196856. [PMID: 29775466; PMCID: PMC5959192; DOI: 10.1371/journal.pone.0196856]
Abstract
It is common in orthognathic surgery practice to evaluate faces with retruded or protruded chins (dysgnathic faces) using photographs. Because motion may alter how the face is perceived, we investigated the perception of faces presented via photographs and videos. Two hundred naïve raters (lay persons without a maxillofacial surgery background) evaluated 12 subjects with varying chin anatomy [so-called skeletal Class I (normal chin), Class II (retruded chin), and Class III (protruded chin)]. Starting from eight traits, factor analysis yielded a two-factor solution, i.e. an "aesthetics-associated traits" cluster and a "personality traits" cluster, which appeared to be uncorrelated. Internal consistency of the factors found for photographs and videos was excellent. Generally, female raters gave more favorable ratings than males, but the effect sizes were small. We analyzed the differences between photograph and video perception and the magnitude of the respective effects. For each skeletal class, the aesthetics-associated dimensions were rated similarly for photographs and video clips. In contrast, specific personality traits were rated differently. Differences in the class-specific personality traits seen in photographs were "smoothed" in the assessment of videos, which implies that photos enhance the stereotypes commonly attributed to a retruded or protruded chin.
114
El Haj M, Daoudi M, Gallouj K, Moustafa AA, Nandrino JL. When your face describes your memories: facial expressions during retrieval of autobiographical memories. Rev Neurosci 2018; 29:861-872. [PMID: 29750658; DOI: 10.1515/revneuro-2018-0001]
Abstract
Thanks to current advances in software analysis of facial expressions, there is burgeoning interest in understanding the emotional facial expressions observed during the retrieval of autobiographical memories. This review describes research on facial expressions during autobiographical retrieval showing distinct emotional facial expressions according to the characteristics of the retrieved memories. More specifically, this research demonstrates that the retrieval of emotional memories can trigger corresponding emotional facial expressions (e.g. positive memories may trigger positive facial expressions). This research also demonstrates variations in facial expressions according to the specificity, self-relevance, or past versus future direction of memory construction. Besides linking research on facial expressions during autobiographical retrieval to the cognitive and affective characteristics of autobiographical memory in general, this review positions this research within the broader context of research on the physiological characteristics of autobiographical retrieval. We also provide several perspectives for clinical studies to investigate facial expressions in populations with deficits in autobiographical memory (e.g. whether autobiographical overgenerality in neurological and psychiatric populations may trigger fewer emotional facial expressions). In sum, this review demonstrates how the evaluation of facial expressions during autobiographical retrieval may help in understanding the functioning and dysfunction of autobiographical memory.
115
Ahmedt-Aristizabal D, Fookes C, Nguyen K, Denman S, Sridharan S, Dionisio S. Deep facial analysis: A new phase I epilepsy evaluation using computer vision. Epilepsy Behav 2018; 82:17-24. [PMID: 29574299; DOI: 10.1016/j.yebeh.2018.02.010]
Abstract
Semiology observation and characterization play a major role in the presurgical evaluation of epilepsy. However, the interpretation of patient movements has subjective and intrinsic challenges. In this paper, we develop approaches to automatically extract and classify semiological patterns from facial expressions. We address limitations of existing computer-based analytical approaches to epilepsy monitoring, in which facial movements have largely been ignored; this is an area that has seen limited advances in the literature. Inspired by recent advances in deep learning, we propose two deep learning models, landmark-based and region-based, to quantitatively identify changes in facial semiology in patients with mesial temporal lobe epilepsy (MTLE) from spontaneous expressions during phase I monitoring. A dataset collected from the Mater Advanced Epilepsy Unit (Brisbane, Australia) is used to evaluate our proposed approach. Our experiments show that the landmark-based approach achieves promising results in analyzing facial semiology, where movements can be effectively marked and tracked when a frontal view of the face is available. However, the region-based counterpart, with spatiotemporal features, achieves more accurate results when confronted with extreme head positions. A multifold cross-validation of the region-based approach exhibited an average test accuracy of 95.19% and an average AUC of 0.98 for the ROC curve. Conversely, a leave-one-subject-out cross-validation scheme for the same approach reveals reduced accuracy, as the model is affected by data limitations, achieving an average test accuracy of 50.85%. Overall, the proposed deep learning models have shown promise in quantifying ictal facial movements in patients with MTLE. In turn, this may serve to enhance automated presurgical epilepsy evaluation by allowing for standardization, mitigating bias, and assessing key features. Such computer-aided diagnosis may help to support clinical decision-making and prevent erroneous localization and surgery.
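The gap between the multifold and leave-one-subject-out figures above is the classic signature of subject leakage: frames from the same patient land in both training and test folds. A self-contained scikit-learn sketch of the two schemes on synthetic data in which subject identity, not the label, drives the features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
subjects = np.repeat(np.arange(10), 30)          # 10 patients, 30 frames each
subject_effect = rng.normal(size=(10, 16))       # patient-specific appearance
X = subject_effect[subjects] + 0.5 * rng.normal(size=(300, 16))
y = subjects % 2                                  # label constant per patient

clf = LogisticRegression(max_iter=1000)
kfold = cross_val_score(clf, X, y, cv=KFold(5, shuffle=True, random_state=0))
loso = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=subjects)
print(kfold.mean(), loso.mean())   # typically near-perfect vs roughly chance
```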
116
Shu X, Tang J, Li Z, Lai H, Zhang L, Yan S. Personalized Age Progression with Bi-Level Aging Dictionary Learning. IEEE Trans Pattern Anal Mach Intell 2018; 40:905-917. [PMID: 28534768; DOI: 10.1109/tpami.2017.2705122]
Abstract
Age progression is defined as aesthetically re-rendering the aging face at any future age for an individual face. In this work, we aim to automatically render aging faces in a personalized way. Basically, for each age group, we learn an aging dictionary to reveal its aging characteristics (e.g., wrinkles), where dictionary bases with the same index from two neighboring aging dictionaries form a particular aging pattern across these two age groups, and a linear combination of all these patterns expresses a particular personalized aging process. Moreover, two factors are taken into consideration in the dictionary learning process. First, beyond the aging dictionaries, each person may have extra personalized facial characteristics, e.g., a mole, which are invariant in the aging process. Second, it is challenging or even impossible to collect faces from all age groups for a particular person, yet much easier and more practical to get face pairs from neighboring age groups. To this end, we propose a novel Bi-level Dictionary Learning based Personalized Age Progression (BDL-PAP) method, in which bi-level dictionary learning is formulated to learn the aging dictionaries based on face pairs from neighboring age groups. Extensive experiments demonstrate the advantages of the proposed BDL-PAP over other state-of-the-art methods in terms of personalized age progression, as well as the performance gain for cross-age face verification by synthesizing aging faces.
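A toy sketch of the rendering idea described above: code a face sparsely on one age group's dictionary, re-render it with the index-aligned bases of the neighboring group's dictionary, and carry over the personalized residual (e.g. a mole) that the dictionaries cannot express. The random dictionaries and orthogonal matching pursuit coding are stand-ins; the paper's bi-level dictionary learning is not reproduced:

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

d, k = 1024, 64                       # pixels, dictionary atoms
rng = np.random.default_rng(2)
D_young = rng.normal(size=(d, k)) / np.sqrt(d)
D_old = rng.normal(size=(d, k)) / np.sqrt(d)   # index-aligned with D_young
face_young = rng.normal(size=d)                # stand-in vectorized face

code = orthogonal_mp(D_young, face_young, n_nonzero_coefs=8)  # sparse code
residual = face_young - D_young @ code         # invariant personal details
face_old = D_old @ code + residual             # age-progressed rendering
```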
117
Dibeklioglu H, Hammal Z, Cohn JF. Dynamic Multimodal Measurement of Depression Severity Using Deep Autoencoding. IEEE J Biomed Health Inform 2018; 22:525-536. [PMID: 28278485; PMCID: PMC5581737; DOI: 10.1109/jbhi.2017.2676878]
Abstract
Depression is one of the most common psychiatric disorders worldwide, with over 350 million people affected. Current methods to screen for and assess depression depend almost entirely on clinical interviews and self-report scales. While useful, such measures lack objective, systematic, and efficient ways of incorporating behavioral observations that are strong indicators of depression presence and severity. Using dynamics of facial and head movement and vocalization, we trained classifiers to detect three levels of depression severity. Participants were a community sample diagnosed with major depressive disorder. They were recorded in clinical interviews (Hamilton Rating Scale for Depression, HRSD) at seven-week intervals over a period of 21 weeks. At each interview, they were scored by the HRSD as moderately to severely depressed, mildly depressed, or remitted. Logistic regression classifiers using leave-one-participant-out validation were compared for facial movement, head movement, and vocal prosody individually and in combination. Accuracy of depression severity measurement from facial movement dynamics was higher than that for head movement dynamics, and each was substantially higher than that for vocal prosody. Accuracy using all three modalities combined only marginally exceeded that of face and head combined. These findings suggest that automatic detection of depression severity from behavioral indicators in patients is feasible and that multimodal measures afford the most powerful detection.
118
Matsukawa K, Endo K, Ishii K, Ito M, Liang N. Facial skin blood flow responses during exposures to emotionally charged movies. J Physiol Sci 2018; 68:175-190. [PMID: 28110456; PMCID: PMC10717512; DOI: 10.1007/s12576-017-0522-3]
Abstract
The changes in regional facial skin blood flow and vascular conductance have been assessed for the first time with noninvasive two-dimensional laser speckle flowmetry during audiovisually elicited emotional challenges of 2 min each (comedy, landscape, and horror movies) in 12 subjects. Limb skin blood flow and vascular conductance and systemic cardiovascular variables were simultaneously measured. The extents of pleasantness and consciousness for each emotional stimulus were estimated by subjective rating from -5 (the most unpleasant; the most unconscious) to +5 (the most pleasant; the most conscious). Facial skin blood flow and vascular conductance, especially in the lips, decreased during viewing of the comedy and horror movies, whereas they did not change during viewing of the landscape movie. The decreases in facial skin blood flow and vascular conductance were the greatest with the comedy movie. The changes in lip, cheek, and chin skin blood flow negatively correlated (P < 0.05) with the subjective ratings of pleasantness and consciousness. The changes in lip skin vascular conductance negatively correlated (P < 0.05) with the subjective rating of pleasantness, while the changes in infraorbital, subnasal, and chin skin vascular conductance negatively correlated (P < 0.05) with the subjective rating of consciousness. However, none of the changes in limb skin blood flow and vascular conductance and systemic hemodynamics correlated with the subjective ratings. A mental arithmetic task did not alter facial or limb skin blood flow, although it influenced systemic cardiovascular variables. These findings suggest that the more pleasant or conscious the emotional state becomes, the more neurally mediated vasoconstriction may occur in facial skin blood vessels.
119
Marinescu AC, Sharples S, Ritchie AC, Sánchez López T, McDowell M, Morvan HP. Physiological Parameter Response to Variation of Mental Workload. Hum Factors 2018; 60:31-56. [PMID: 28965433; PMCID: PMC5777546; DOI: 10.1177/0018720817733101]
Abstract
OBJECTIVE To examine the relationship between experienced mental workload and physiological response by noninvasive monitoring of physiological parameters. BACKGROUND Previous studies have examined how individual physiological measures respond to changes in mental demand and subjective reports of workload. This study explores the response of multiple physiological parameters and quantifies their added value when estimating the level of demand. METHOD The study was conducted in laboratory conditions and required participants to perform a visual-motor task that imposed varying levels of demand. The data collected consisted of physiological measurements (heart interbeat intervals, breathing rate, pupil diameter, facial thermography), subjective ratings of workload (Instantaneous Self-Assessment Workload Scale [ISA] and NASA-Task Load Index), and task performance. RESULTS Facial thermography and pupil diameter were demonstrated to be good candidates for noninvasive workload measurement: for seven out of 10 participants, pupil diameter showed a strong correlation (R values between .61 and .79, significant at .01) with mean normalized ISA values. Facial thermography measures added on average 47.7% to the amount of variability in task performance explained by a regression model. As with the ISA ratings, the relationship between the physiological measures and performance showed strong interparticipant differences, with some individuals demonstrating a much stronger relationship between workload and performance measures than others. CONCLUSION The results presented in this paper demonstrate that facial thermography and pupil diameter can be used for noninvasive real-time measurement of workload. APPLICATION The methods presented in this article are, with current technological capabilities, better suited to workplaces where the person is seated, offering the possibility of application to pilots and air traffic controllers.
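The "47.7% added variability explained" result above is an incremental-R² comparison. A minimal sketch of that computation on synthetic data; the features, effect sizes, and noise level are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
pupil = rng.normal(size=(200, 1))               # baseline predictor
thermo = rng.normal(size=(200, 2))              # facial thermography features
perf = 0.5 * pupil[:, 0] + 0.4 * thermo[:, 0] + rng.normal(scale=0.5, size=200)

base = LinearRegression().fit(pupil, perf).score(pupil, perf)
full_X = np.hstack([pupil, thermo])
full = LinearRegression().fit(full_X, perf).score(full_X, perf)
print(base, full, full - base)   # gain in R^2 attributable to thermography
```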
120
Zebrowitz LA, Ward N, Boshyan J, Gutchess A, Hadjikhani N. Older adults' neural activation in the reward circuit is sensitive to face trustworthiness. Cogn Affect Behav Neurosci 2018; 18:21-34. [PMID: 29214437; PMCID: PMC7598091; DOI: 10.3758/s13415-017-0549-1]
Abstract
We examined older adult (OA) and younger adult (YA) neural sensitivity to face trustworthiness in reward circuit regions, previously found to respond to trustworthiness in YA. Interactions of face trustworthiness with age revealed effects exclusive to OA in the amygdala and caudate, and an effect that was not moderated by age in the dorsal anterior cingulate cortex (dACC). OA, but not YA, showed a nonlinear amygdala response to face trustworthiness, with significantly stronger activation response to high than to medium trustworthy faces, and no difference between low and medium or high. This may explain why an earlier study investigating OA amygdala activation to trustworthiness failed to find a significant effect, since only the linear low versus high trustworthiness difference was assessed. OA, but not YA, also showed significantly stronger activation to high than to low trustworthy faces in the right caudate, indicating a positive linear effect, consistent with previous YA research, as well as significantly stronger activation to high than to medium but not low trustworthy faces in the left caudate, indicating a nonlinear effect. Activation in dACC across both age groups showed a positive linear effect consistent with previous YA research. Finally, OA rated the faces as more trustworthy than did YA across all levels of trustworthiness. Future research should examine whether the null effects for YA were due to our inclusion of older faces. Research also should investigate possible implications of our findings for more ecologically valid OA responses to people who vary in facial trustworthiness.
121
Kim SH, Hwang S, Hong YJ, Kim JJ, Kim KH, Chung CJ. Visual attention during the evaluation of facial attractiveness is influenced by facial angles and smile. Angle Orthod 2018; 88:329-337. [PMID: 29376732; DOI: 10.2319/080717-528.1]
Abstract
OBJECTIVE To examine the changes in visual attention influenced by facial angles and smile during the evaluation of facial attractiveness. MATERIALS AND METHODS Thirty-three young adults were asked to rate overall facial attractiveness (tasks 1 and 3) or to select the most attractive face (task 2) by looking at multiple panel stimuli consisting of 0°, 15°, 30°, 45°, 60°, and 90° rotated facial photos, with or without a smile, for three model face photos and a self-photo (self-face). Eye gaze and fixation time (FT) were monitored by an eye-tracking device during the tasks. Participants were asked to fill out a subjective questionnaire asking, "Which face was primarily looked at when evaluating facial attractiveness?" RESULTS When rating overall facial attractiveness (task 1) for model faces, FT was highest for the 0° face and lowest for the 90° face regardless of the smile (P < .01). However, when the most attractive face was to be selected (task 2), the FT of the 0° face decreased, while it significantly increased for the 45° face (P < .001). When facial attractiveness was evaluated with the simplified panels combining facial angles and smile (task 3), the FT of the 0° smiling face was the highest (P < .01). While most participants reported that they looked mainly at the 0° smiling face when rating facial attractiveness, visual attention was broadly distributed across facial angles. CONCLUSIONS Laterally rotated faces and the presence of a smile highly influence visual attention during the evaluation of facial esthetics.
122
Quinto-Sánchez M, Muñoz-Muñoz F, Gomez-Valdes J, Cintas C, Navarro P, Cerqueira CCSD, Paschetta C, de Azevedo S, Ramallo V, Acuña-Alonzo V, Adhikari K, Fuentes-Guajardo M, Hünemeier T, Everardo P, de Avila F, Jaramillo C, Arias W, Gallo C, Poletti G, Bedoya G, Bortolini MC, Canizales-Quinteros S, Rothhammer F, Rosique J, Ruiz-Linares A, Gonzalez-Jose R. Developmental pathways inferred from modularity, morphological integration and fluctuating asymmetry patterns in the human face. Sci Rep 2018; 8:963. [PMID: 29343858; PMCID: PMC5772513; DOI: 10.1038/s41598-018-19324-y]
Abstract
Facial asymmetries are usually measured and interpreted as proxies for developmental noise. However, analyses focused on their developmental and genetic architecture are scarce. To advance on this topic, studies based on a comprehensive and simultaneous analysis of modularity, morphological integration and facial asymmetries, including both phenotypic and genomic information, are needed. Here we explore several modularity hypotheses in a sample of Latin American mestizos, in order to test whether modularity and integration patterns differ across genomic ancestry backgrounds. To do so, 4104 individuals were analyzed using 3D photogrammetry reconstructions and a set of 34 facial landmarks placed on each individual. We found a pattern of modularity and integration that is conserved across sub-samples differing in their genomic ancestry background. Specifically, a signal of modularity based on the functional demands and organization of the face is regularly observed across the whole sample. Our results shed more light on previous evidence obtained from genome-wide association studies performed on the same samples, indicating the action of different genomic regions contributing to the expression of the nose and mouth facial phenotypes. Our results also indicate that large samples including phenotypic and genomic metadata enable a better understanding of the developmental and genetic architecture of craniofacial phenotypes.
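Fluctuating asymmetry of a landmark configuration such as the 34-point set above is commonly scored by comparing a face with its relabeled mirror image after Procrustes alignment. A minimal sketch of that object-symmetry logic; the landmark pairs and test data are placeholders, not the authors' landmark scheme:

```python
import numpy as np
from scipy.spatial import procrustes

def asymmetry_score(lm, pairs):
    """lm: (k, 3) landmark array; pairs: list of (left, right) index pairs."""
    mirrored = lm * np.array([-1.0, 1.0, 1.0])   # reflect across the midplane
    relabeled = mirrored.copy()
    for i, j in pairs:                           # swap paired landmarks
        relabeled[[i, j]] = mirrored[[j, i]]
    _, _, disparity = procrustes(lm, relabeled)  # align, take the residual
    return disparity                             # 0 = perfectly symmetric

rng = np.random.default_rng(0)
landmarks = rng.normal(size=(34, 3))             # stand-in for one face
print(asymmetry_score(landmarks, pairs=[(0, 1), (2, 3)]))
```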
123
Coutinho E, Gentsch K, van Peer J, Scherer KR, Schuller BW. Evidence of emotion-antecedent appraisal checks in electroencephalography and facial electromyography. PLoS One 2018; 13:e0189367. [PMID: 29293572; PMCID: PMC5749688; DOI: 10.1371/journal.pone.0189367]
Abstract
In the present study, we applied Machine Learning (ML) methods to identify psychobiological markers of cognitive processes involved in the process of emotion elicitation as postulated by the Component Process Model (CPM). In particular, we focused on the automatic detection of five appraisal checks—novelty, intrinsic pleasantness, goal conduciveness, control, and power—in electroencephalography (EEG) and facial electromyography (EMG) signals. We also evaluated the effects on classification accuracy of averaging the raw physiological signals over different numbers of trials, and whether the use of minimal sets of EEG channels localized over specific scalp regions of interest are sufficient to discriminate between appraisal checks. We demonstrated the effectiveness of our approach on two data sets obtained from previous studies. Our results show that novelty and power appraisal checks can be consistently detected in EEG signals above chance level (binary tasks). For novelty, the best classification performance in terms of accuracy was achieved using features extracted from the whole scalp, and by averaging across 20 individual trials in the same experimental condition (UAR = 83.5 ± 4.2; N = 25). For power, the best performance was obtained by using the signals from four pre-selected EEG channels averaged across all trials available for each participant (UAR = 70.6 ± 5.3; N = 24). Together, our results indicate that accurate classification can be achieved with a relatively small number of trials and channels, but that averaging across a larger number of individual trials is beneficial for the classification for both appraisal checks. We were not able to detect any evidence of the appraisal checks under study in the EMG data. The proposed methodology is a promising tool for the study of the psychophysiological mechanisms underlying emotional episodes, and their application to the development of computerized tools (e.g., Brain-Computer Interface) for the study of cognitive processes involved in emotions.
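The UAR figures quoted above are unweighted average recall: recall computed per class and averaged without class-size weighting, so a classifier cannot score well merely by favoring the majority class. A minimal sketch with illustrative labels:

```python
import numpy as np

def uar(y_true, y_pred):
    """Unweighted average recall over all classes present in y_true."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))

y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1])
y_pred = np.zeros(8, dtype=int)          # majority-class guesser
print(uar(y_true, y_pred))               # 0.5, although accuracy is 0.75
```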
124
Mishima K, Yamada T, Fujiwara K, Sugahara T. Development and Clinical Usage of a Motion Analysis System for the Face: Preliminary Report. Cleft Palate Craniofac J 2004; 41:559-564. [PMID: 15352855; DOI: 10.1597/03-079.1]
Abstract
Objective To evaluate the motion of the face and jaw in patients with cleft lip and palate or facial palsy, and in patients after reconstruction, a motion-analysis system was developed. The aim of this article was to investigate the accuracy of this system and the possibility of clinical application. Methods Markers of 1 to 2 mm were placed on the face, and motion images were obtained by three digital video cameras controlled by a synchronizer and recorded on digital video tape. The images were processed on a personal computer. The markers were automatically tracked across the image sequences, and their three-dimensional coordinates were then calculated. Main Outcome Measures System accuracy was investigated using a high-accuracy positioning actuator and an object of known dimensions. In three patients with bilateral cleft lip and palate, lip pursing was analyzed using this method. Results and Conclusions The mean differences from the known values for the distances between the tracked sample points and for the distances moved by the sample points per frame were 0.24 to 0.36 mm and 0.02 to 0.05 mm, respectively. Both results were similar regardless of movement speed or direction. In five repeated measurements, the mean differences from the known values for the distances and the movement speed ranged from 0.19 to 0.38 mm and from 0.00 to 0.07 mm, respectively. Examination of three patients with bilateral cleft lip and palate indicated that lip movement can be successfully analyzed using the present system.
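Recovering a marker's three-dimensional coordinates from synchronized camera views, as described above, is a linear triangulation problem. A minimal direct-linear-transform sketch; the camera matrices are illustrative, and a real system would calibrate them first:

```python
import numpy as np

def triangulate(proj_mats, pts2d):
    """proj_mats: list of 3x4 camera matrices; pts2d: list of (u, v) pixels."""
    rows = []
    for P, (u, v) in zip(proj_mats, pts2d):
        rows.append(u * P[2] - P[0])   # each view adds two linear constraints
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]                         # null-space solution, homogeneous
    return X[:3] / X[3]                # dehomogenize to (x, y, z)

# Synthetic check: project a known 3D point through three cameras, recover it.
P0 = np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
P2 = np.hstack([np.eye(3), np.array([[0.0], [-1.0], [0.0]])])
point = np.array([0.2, -0.1, 2.0, 1.0])
obs = [(p @ point)[:2] / (p @ point)[2] for p in (P0, P1, P2)]
print(triangulate([P0, P1, P2], obs))  # ~ [0.2, -0.1, 2.0]
```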
125
Wen Q, Xu F, Yong JH. Real-Time 3D Eye Performance Reconstruction for RGBD Cameras. IEEE Trans Vis Comput Graph 2017; 23:2586-2598. [PMID: 28026772; DOI: 10.1109/tvcg.2016.2641442]
Abstract
This paper proposes a real-time method for 3D eye performance reconstruction using a single RGBD sensor. Combined with facial surface tracking, our method generates more pleasing facial performances with vivid eye motions. In our method, a novel scheme is proposed to estimate eyeball motion by minimizing the differences between a rendered eyeball and the recorded image. Our method considers and handles different appearances of human irises, lighting variations, and highlights on images via the proposed eyeball model and optimization. Robustness and real-time performance are achieved through a novel 3D Taylor-expansion-based linearization. Furthermore, we propose an online bidirectional regression method to handle occlusions and other tracking failures on either of the two eyes using information from the opposite eye. Experiments demonstrate that our technique achieves robust and accurate eye performance reconstruction for different iris appearances, with various head/face/eye motions, and under different lighting conditions.