1
Mares I, Smith FW, Goddard EJ, Keighery L, Pappasava M, Ewing L, Smith ML. Effects of expectation on face perception and its association with expertise. Sci Rep 2024; 14:9402. [PMID: 38658575] [PMCID: PMC11043383] [DOI: 10.1038/s41598-024-59284-0]
Abstract
Perceptual decisions arise from the combination of priors and sensory input. While priors are broadly understood to reflect experience and expertise developed over one's lifetime, the role of perceptual expertise at the individual level has seldom been directly explored. Here, we manipulated probabilistic information associated with a high- and a low-expertise category (faces and cars, respectively), while assessing individual levels of expertise with each category. Sixty-seven participants learned the probabilistic association between a color cue and each target category (face/car) in a behavioural categorization task. Neural activity (EEG) was then recorded in the same participants in a similar paradigm that featured the previously learned contingencies without the explicit task. Behaviourally, perception of the higher-expertise category (faces) was modulated by expectation: we observed facilitatory and interference effects when targets were correctly or incorrectly expected, and these effects were associated with independently measured individual levels of face expertise. Multivariate pattern analysis of the EEG signal revealed clear effects of expectation from 100 ms post-stimulus, with significant decoding of the neural response to expected vs. unexpected stimuli when participants viewed identical images. The latency of peak decoding when participants saw faces was directly associated with individual-level facilitation effects in the behavioural task. These results provide time-sensitive evidence of expectation effects on early perception and highlight the role of higher-level expertise in forming priors.
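The time-resolved decoding described in this abstract can be illustrated with a minimal sketch. This is an assumed, generic setup (hypothetical data shapes and a simple nearest-centroid classifier), not the study's actual MVPA pipeline:

```python
import numpy as np

def decode_over_time(epochs_a, epochs_b, n_folds=5, seed=0):
    """Cross-validated two-class decoding accuracy at each timepoint,
    using a nearest-centroid classifier on channel patterns.
    epochs_a, epochs_b: arrays of shape (n_trials, n_channels, n_times)."""
    X = np.concatenate([epochs_a, epochs_b])
    y = np.array([0] * len(epochs_a) + [1] * len(epochs_b))
    idx = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(idx, n_folds)
    n_times = X.shape[2]
    acc = np.zeros(n_times)
    for t in range(n_times):
        Xt = X[:, :, t]  # channel pattern at this timepoint
        correct = 0
        for test in folds:
            train = np.setdiff1d(idx, test)
            c0 = Xt[train][y[train] == 0].mean(axis=0)  # class centroids from training trials
            c1 = Xt[train][y[train] == 1].mean(axis=0)
            d0 = np.linalg.norm(Xt[test] - c0, axis=1)
            d1 = np.linalg.norm(Xt[test] - c1, axis=1)
            correct += np.sum((d1 < d0) == (y[test] == 1))
        acc[t] = correct / len(y)
    return acc
```

The latency of peak decoding, as analysed in the paper, would then correspond to something like `np.argmax(acc)` on the resulting accuracy time course.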
Affiliation(s)
- Inês Mares
- School of Psychological Sciences, Birkbeck College, University of London, London, UK
- William James Center for Research, Ispa - Instituto Universitário, Lisbon, Portugal
- Fraser W Smith
- School of Psychology, University of East Anglia, Norwich, UK
- E J Goddard
- School of Psychological Sciences, Birkbeck College, University of London, London, UK
- Lianne Keighery
- School of Psychological Sciences, Birkbeck College, University of London, London, UK
- Department of Clinical and Movement Neurosciences, Queen Square Institute of Neurology, University College London, London, UK
- Michael Pappasava
- School of Psychological Sciences, Birkbeck College, University of London, London, UK
- Centre for Genomics and Child Health, Blizard Institute, Queen Mary University of London, London, UK
- Louise Ewing
- School of Psychology, University of East Anglia, Norwich, UK
- Marie L Smith
- School of Psychological Sciences, Birkbeck College, University of London, London, UK
- Centre for Brain and Cognitive Development, Birkbeck College, University of London, London, UK
2
Talala S, Shvimmer S, Simhon R, Gilead M, Yitzhaky Y. Emotion Classification Based on Pulsatile Images Extracted from Short Facial Videos via Deep Learning. Sensors (Basel) 2024; 24:2620. [PMID: 38676235] [PMCID: PMC11053953] [DOI: 10.3390/s24082620]
Abstract
Most human emotion recognition methods largely depend on classifying stereotypical facial expressions that represent emotions. However, such facial expressions do not necessarily correspond to actual emotional states and may instead reflect communicative intentions. In other cases, emotions are hidden, cannot be expressed, or have lower arousal manifested by less pronounced facial expressions, as may occur during passive video viewing. This study improves an emotion classification approach developed in a previous study, which classifies emotions remotely from short facial video data without relying on stereotypical facial expressions or contact-based methods. In this approach, we aim to remotely sense transdermal cardiovascular spatiotemporal facial patterns associated with different emotional states and analyze these data via machine learning. In this paper, we propose several improvements, which include better remote heart rate estimation via a preliminary skin segmentation, an improved heartbeat peak-and-trough detection process, and better emotion classification accuracy achieved by employing an appropriate deep-learning classifier using only RGB camera input data. We used the dataset obtained in the previous study, which contains facial videos of 110 participants who passively viewed 150 short videos that elicited five emotion types: amusement, disgust, fear, sexual arousal, and no emotion, while three cameras with different wavelength sensitivities (visible spectrum, near-infrared, and longwave infrared) recorded them simultaneously. From the short facial videos, we extracted unique high-resolution spatiotemporal, physiologically affected features and examined them as input features with different deep-learning approaches. An EfficientNet-B0 model was able to classify participants' emotional states with an overall average accuracy of 47.36% using a single input spatiotemporal feature map obtained from a regular RGB camera.
Affiliation(s)
- Shlomi Talala
- Department of Electro-Optics and Photonics Engineering, School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer Sheva 84105, Israel
- Shaul Shvimmer
- Department of Electro-Optics and Photonics Engineering, School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer Sheva 84105, Israel
- Rotem Simhon
- School of Psychology, Tel Aviv University, Tel Aviv 39040, Israel
- Michael Gilead
- School of Psychology, Tel Aviv University, Tel Aviv 39040, Israel
- Yitzhak Yitzhaky
- Department of Electro-Optics and Photonics Engineering, School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer Sheva 84105, Israel
3
Cheng X, Wang S, Wei H, Sun X, Xin L, Li L, Li C, Wang Z. Application of Stereo Digital Image Correlation on Facial Expressions Sensing. Sensors (Basel) 2024; 24:2450. [PMID: 38676067] [PMCID: PMC11054127] [DOI: 10.3390/s24082450]
Abstract
Facial expressions are an important reflection of human emotion and constitute a dynamic deformation process; analyzing facial movements is therefore an effective means of understanding expressions. However, there is currently a lack of methods capable of analyzing the dynamic details of full-field deformation in expressions. In this paper, to enable effective dynamic analysis of expressions, a classic optical measurement method called stereo digital image correlation (stereo-DIC or 3D-DIC) is employed to analyze the deformation fields of facial expressions. The formation processes of six basic facial expressions of experimental subjects are analyzed through the displacement and strain fields calculated by 3D-DIC. The displacement fields of each expression exhibit strong consistency with the action units (AUs) defined by the classical Facial Action Coding System (FACS). Moreover, the gradient of the displacement, i.e., the strain field, offers particular advantages in characterizing facial expressions: because strain is localized, it effectively senses the nuanced dynamics of facial movements. By processing extensive data, this study identifies two featured regions in the six basic expressions, one where deformation begins and one where deformation is most severe. Based on these two regions, the temporal evolution of the six basic expressions is discussed. These investigations demonstrate the superior performance of 3D-DIC in the quantitative analysis of facial expressions, and the proposed analytical strategy may have value in objectively characterizing human expressions from quantitative measurements.
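The displacement-to-strain step mentioned in this abstract can be sketched roughly as follows, assuming measured in-plane displacement fields on a regular pixel grid and a small-strain approximation (this is a generic illustration, not the authors' full 3D-DIC pipeline):

```python
import numpy as np

def small_strain_fields(ux, uy, spacing=1.0):
    """Small-strain tensor components from 2D displacement fields.
    ux, uy: displacement components sampled on a regular grid (rows = y, cols = x)."""
    # np.gradient returns derivatives along axis 0 (y) first, then axis 1 (x)
    dux_dy, dux_dx = np.gradient(ux, spacing)
    duy_dy, duy_dx = np.gradient(uy, spacing)
    exx = dux_dx                    # normal strain along x
    eyy = duy_dy                    # normal strain along y
    exy = 0.5 * (dux_dy + duy_dx)   # shear strain
    return exx, eyy, exy
```

For a uniform stretch (linear displacement field), the recovered strain fields are constant, which is a quick sanity check on the gradient computation.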
Affiliation(s)
- Xuanshi Cheng
- School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Shibin Wang
- School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Huixin Wei
- School of Civil Engineering and Architecture, Nanchang University, Nanchang 330000, China
- Xin Sun
- School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Lipan Xin
- School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Linan Li
- School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Chuanwei Li
- School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Zhiyong Wang
- School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
4
Wingenbach TSH, Ribeiro B, Nakao C, Boggio PS. Modulation of facial muscle responses by another person's presence and affiliative touch during affective image viewing. Cogn Emot 2024; 38:59-70. [PMID: 37712676] [DOI: 10.1080/02699931.2023.2258588]
Abstract
Stimulating CT-afferents by forearm caresses produces a subjective experience of pleasantness in the receiver and modulates subjective evaluations of viewed affective images. Receiving touch from another person includes the social element of that person's presence, which has been found to influence affective image evaluations even without touch. The current study investigated whether these modulations extend to facial muscle responses associated with positive and negative affect across touch-involving and mere-presence conditions. Female participants (N = 40, M(age) = 22.4, SD = 5.3) watched affective images (neutral, positive, negative) while facial electromyography was recorded (sites: zygomaticus, corrugator). ANOVAs showed that providing touch to another person or to oneself modulated zygomaticus responses when viewing positive images: providing CT-afferent stimulating touch (i.e., forearm caresses) dampened the positive affective facial muscle response to positive images. Providing touch to another person generally increased corrugator activity, which is related to negative affect. Receiving touch did not modulate affective facial muscle responses during image viewing, although it may affect later cognitive processes. Together, the previously reported social and touch modulations of subjective evaluations of affective images do not translate directly to facial muscle responses during affective image viewing, which were modulated differently.
Affiliation(s)
- Tanja S H Wingenbach
- Centre for Health and Biological Sciences, Social and Cognitive Neuroscience Laboratory, Mackenzie Presbyterian University, Sao Paulo, Brazil
- Faculty of Medicine, University of Zurich, Zurich, Switzerland
- Department of Consultation-Liaison Psychiatry and Psychosomatic Medicine, University Hospital Zurich, Zurich, Switzerland
- Faculty of Education, Health, and Human Sciences, School of Human Sciences, University of Greenwich, London, UK
- Beatriz Ribeiro
- Centre for Health and Biological Sciences, Social and Cognitive Neuroscience Laboratory, Mackenzie Presbyterian University, Sao Paulo, Brazil
- Caroline Nakao
- Centre for Health and Biological Sciences, Social and Cognitive Neuroscience Laboratory, Mackenzie Presbyterian University, Sao Paulo, Brazil
- Paulo S Boggio
- Centre for Health and Biological Sciences, Social and Cognitive Neuroscience Laboratory, Mackenzie Presbyterian University, Sao Paulo, Brazil
- National Institute of Science and Technology on Social and Affective Neuroscience, CNPq, Brazil
5
Fernández J, Albayay J, Gálvez-García G, Iborra O, Huertas C, Gómez-Milán E, Caballo VE. Facial infrared thermography as an index of social anxiety. Anxiety Stress Coping 2024; 37:114-126. [PMID: 37029987] [DOI: 10.1080/10615806.2023.2199209]
Abstract
Previous research on physiological indices of social anxiety has offered unclear results. In this study, participants with low and high social anxiety performed five social interaction tasks while being recorded with a thermal camera. Each task was associated with a dimension assessed by the Social Anxiety Questionnaire for Adults (1 = interactions with strangers, 2 = speaking in public/talking with people in authority, 3 = criticism and embarrassment, 4 = assertive expression of annoyance, disgust, or displeasure, 5 = interactions with the opposite sex). Mixed-effects models revealed that the temperature of the tip of the nose decreased significantly in participants with low (vs. high) social anxiety (p < 0.001), while no significant differences were found in the other facial regions of interest: forehead (p = 0.999) and cheeks (p = 0.999). Furthermore, task 1 was the most effective at discriminating social anxiety via nose-tip thermal change, with a trend toward higher nose temperature in participants with high social anxiety and lower nose temperature in the low social anxiety group. We emphasize the importance of pairing thermography with specific tasks as an ecological method, and nose-tip thermal change as a psychophysiological index associated with social anxiety.
Affiliation(s)
- Jesús Fernández
- Centro de Investigación Mente, Cerebro y Comportamiento, Universidad de Granada, Granada, Spain
- Javier Albayay
- Centro Interdipartimentale Mente/Cervello, Università degli Studi di Trento, Rovereto, Italy
- Germán Gálvez-García
- Departamento de Psicología, Universidad de La Frontera, Temuco, Chile
- Departamento de Psicología Básica, Psicobiología y Metodología de las Ciencias del Comportamiento, Facultad de Psicología, Universidad de Salamanca, Salamanca, Spain
- Oscar Iborra
- Centro de Investigación Mente, Cerebro y Comportamiento, Universidad de Granada, Granada, Spain
- Carmen Huertas
- Centro de Investigación Mente, Cerebro y Comportamiento, Universidad de Granada, Granada, Spain
- Emilio Gómez-Milán
- Centro de Investigación Mente, Cerebro y Comportamiento, Universidad de Granada, Granada, Spain
- Vicente E Caballo
- Centro de Investigación Mente, Cerebro y Comportamiento, Universidad de Granada, Granada, Spain
6
Plaza PL, Renier L, Rosemann S, De Volder AG, Rauschecker JP. Sound-encoded faces activate the left fusiform face area in the early blind. PLoS One 2023; 18:e0286512. [PMID: 37992062] [PMCID: PMC10664868] [DOI: 10.1371/journal.pone.0286512]
Abstract
Face perception in humans and nonhuman primates is accomplished by a patchwork of specialized cortical regions. How these regions develop has remained controversial. In sighted individuals, facial information is primarily conveyed via the visual modality. Early blind individuals, on the other hand, can recognize shapes using auditory and tactile cues. Here we demonstrate that such individuals can learn to distinguish faces from houses and other shapes by using a sensory substitution device (SSD) presenting schematic faces as sound-encoded stimuli in the auditory modality. Using functional MRI, we then asked whether a face-selective brain region like the fusiform face area (FFA) shows selectivity for faces in the same subjects, and indeed, we found evidence for preferential activation of the left FFA by sound-encoded faces. These results imply that FFA development does not depend on experience with visual faces per se but may instead depend on exposure to the geometry of facial configurations.
Affiliation(s)
- Paula L. Plaza
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America
- Laurent Renier
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America
- Neural Rehabilitation Laboratory, Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
- Stephanie Rosemann
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America
- Anne G. De Volder
- Neural Rehabilitation Laboratory, Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
- Josef P. Rauschecker
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America
7
Abstract
Facial expressions are an increasingly used tool to assess emotional experience and affective state during experimental procedures in animal models. Previous studies have successfully related specific facial features to different positive- and negative-valence situations, most notably in relation to pain. However, characterizing and interpreting such expressions remains a major challenge. We identified seven easily visualizable facial parameters on mouse profiles, accounting for changes in eye, ear, mouth, snout, and face orientation. We monitored their relative position on the face across time and throughout sequences of positive and aversive gustatory and somatosensory stimuli in freely moving mice. The facial parameters successfully captured response profiles to each stimulus and reflected spontaneous movements in response to stimulus valence, as well as contextual elements such as habituation. Notably, eye opening was increased by palatable tastants and innocuous touch, while it was reduced by tasting a bitter solution and by painful stimuli. Mouse ear posture appears to convey a large part of the emotional information. Facial expressions accurately depicted welfare and affective state in a time-sensitive manner, tracking time-dependent stimulation. This study is the first to delineate rodent facial expression features in multiple positive-valence situations, including affective touch. We suggest that this facial expression assay might provide mechanistic insights into emotional expression and improve the translational value of experimental studies in rodents on pain and other states.
Affiliation(s)
- Olivia Le Moëne
- Division of Neurobiology, Department of Biomedical and Clinical Sciences, Linköping University, Linköping 581 83, Sweden
- Max Larsson
- Division of Neurobiology, Department of Biomedical and Clinical Sciences, Linköping University, Linköping 581 83, Sweden
8
Li S, Xiao K, Li P. Spectra Reconstruction for Human Facial Color from RGB Images via Clusters in 3D Uniform CIELab* and Its Subordinate Color Space. Sensors (Basel) 2023; 23:810. [PMID: 36679603] [PMCID: PMC9861444] [DOI: 10.3390/s23020810]
Abstract
Previous research has demonstrated the potential to reconstruct human facial skin spectra from the responses of RGB cameras, enabling high-fidelity color reproduction of facial skin in various industrial applications. Nonetheless, the level of precision is still expected to improve. Inspired by the asymmetry of human facial skin color in the CIELab* color space, we propose a practical framework, HPCAPR, for facial skin reflectance reconstruction based on calibrated datasets, which reconstructs the facial spectra within subsets derived from clustering techniques in several spectrometric and colorimetric spaces, i.e., the spectral reflectance space, the Principal Component (PC) space, CIELab*, and its three 2D subordinate color spaces, La*, Lb*, and ab*. The spectra reconstruction algorithm is optimized by combining state-of-the-art algorithms and thoroughly scanning the parameters. The results show that the hybrid of PCA and RGB polynomial regression with three PCs plus a first-order polynomial extension gives the best results. Performance can be improved substantially by operating the spectral reconstruction framework within the subset classified in the La* color subspace: compared with not applying the clustering technique, the median and maximum errors for the best cluster improve by 25.2% and 57.1%, respectively, and for the worst cluster the maximum error is reduced by 42.2%.
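The PCA-plus-polynomial-regression hybrid named in this abstract can be sketched in a few lines of numpy. This is an illustrative implementation under assumed data shapes, not the published HPCAPR code, and the clustering into color subspaces is omitted:

```python
import numpy as np

def fit_spectral_reconstructor(rgb, spectra, n_pcs=3):
    """Fit a PCA + first-order polynomial regression model.
    rgb: (n_samples, 3) camera responses; spectra: (n_samples, n_bands) reflectances."""
    mean = spectra.mean(axis=0)
    centered = spectra - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_pcs]                        # (n_pcs, n_bands) principal components
    coeffs = centered @ basis.T               # PC coefficients of each training spectrum
    # First-order polynomial expansion of camera responses: [1, R, G, B]
    X = np.hstack([np.ones((rgb.shape[0], 1)), rgb])
    W, *_ = np.linalg.lstsq(X, coeffs, rcond=None)  # regression: RGB -> PC coefficients
    return mean, basis, W

def reconstruct(rgb, model):
    """Reconstruct reflectance spectra from camera responses."""
    mean, basis, W = model
    X = np.hstack([np.ones((rgb.shape[0], 1)), rgb])
    return X @ W @ basis + mean
```

With a calibrated training set, the same fit/reconstruct pair would be run separately inside each cluster, which is where the paper reports its accuracy gains.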
Affiliation(s)
- Suixian Li
- Flying College, Binzhou University, Binzhou 256600, China
- School of Design, University of Leeds, Leeds LS2 9JT, UK
- Kaida Xiao
- School of Design, University of Leeds, Leeds LS2 9JT, UK
- Pingqi Li
- School of Informatics, University of Edinburgh, Edinburgh EH8 9YL, UK
9
Liu J, Hui B, Li K, Liu Y, Lai YK, Zhang Y, Liu Y, Yang J. Geometry-Guided Dense Perspective Network for Speech-Driven Facial Animation. IEEE Trans Vis Comput Graph 2022; 28:4873-4886. [PMID: 34449390] [DOI: 10.1109/tvcg.2021.3107669]
Abstract
Realistic speech-driven 3D facial animation is a challenging problem due to the complex relationship between speech and the face. In this paper, we propose a deep architecture, called the Geometry-guided Dense Perspective Network (GDPnet), to achieve speaker-independent, realistic 3D facial animation. The encoder is designed with dense connections to strengthen feature propagation and encourage the re-use of audio features, and the decoder is integrated with an attention mechanism to adaptively recalibrate point-wise feature responses by explicitly modeling interdependencies between different neuron units. We also introduce a non-linear face reconstruction representation as a guide for the latent space to obtain more accurate deformation, which helps solve geometry-related deformation and aids generalization across subjects. Huber and HSIC (Hilbert-Schmidt Independence Criterion) constraints are adopted to promote the robustness of our model and to better exploit non-linear and high-order correlations. Experimental results on a public dataset and a real scanned dataset validate the superiority of the proposed GDPnet compared with state-of-the-art models. The code is available for research purposes at http://cic.tju.edu.cn/faculty/likun/projects/GDPnet.
10
Gat L, Gerston A, Shikun L, Inzelberg L, Hanein Y. Similarities and disparities between visual analysis and high-resolution electromyography of facial expressions. PLoS One 2022; 17:e0262286. [PMID: 35192638] [PMCID: PMC8863227] [DOI: 10.1371/journal.pone.0262286]
Abstract
Computer vision (CV) is widely used in the investigation of facial expressions, with applications ranging from psychological evaluation to neurology, to name just two examples. CV for identifying facial expressions may suffer from several shortcomings: it provides only indirect information about muscle activation, it is insensitive to activations that do not involve visible deformations, such as jaw clenching, and it relies on high-resolution and unobstructed visuals. High-density surface electromyography (sEMG) recording with soft electrode arrays is an alternative approach that provides direct information about muscle activation, even from freely behaving humans. In this investigation, we compare CV and sEMG analyses of facial muscle activation. We used independent component analysis (ICA) and multiple linear regression (MLR) to quantify the similarity and disparity between the two approaches for posed muscle activations. The comparison reveals similarity in event detection but discrepancies and inconsistencies in source identification. Specifically, the correspondence between sEMG and action unit (AU)-based analyses, the most widely used basis of CV muscle activation prediction, appears to vary between participants and sessions. We also compare AU and sEMG data of spontaneous smiles, highlighting the differences between the two approaches. The data presented in this paper suggest that AU-based analysis has a limited ability to reliably compare across sessions and individuals, and highlight the advantages of high-resolution sEMG for facial expression analysis.
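The MLR-based similarity quantification mentioned in this abstract can be sketched as a variance-explained measure: regress one modality's activation trace on the other's component traces and report R². This is a generic illustration with assumed array shapes, not the study's exact analysis:

```python
import numpy as np

def mlr_r2(sources, target):
    """Fraction of variance in a target activation trace explained by
    multiple linear regression on source component traces.
    sources: (n_samples, n_components); target: (n_samples,)."""
    X = np.hstack([np.ones((sources.shape[0], 1)), sources])  # add intercept column
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    residuals = target - X @ beta
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((target - target.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

A high R² would indicate that, say, an AU intensity trace is well captured by a linear combination of sEMG independent components; a low R² flags the source-identification discrepancies the paper reports.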
Affiliation(s)
- Liraz Gat
- School of Electrical Engineering, Tel Aviv University, Tel Aviv, Israel
- Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv University, Tel Aviv, Israel
- Aaron Gerston
- Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv University, Tel Aviv, Israel
- X-trodes, Herzelia, Israel
- Liu Shikun
- School of Electrical Engineering, Tel Aviv University, Tel Aviv, Israel
- Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv University, Tel Aviv, Israel
- Lilah Inzelberg
- Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv University, Tel Aviv, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Yael Hanein
- School of Electrical Engineering, Tel Aviv University, Tel Aviv, Israel
- Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv University, Tel Aviv, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- X-trodes, Herzelia, Israel
11
Abstract
Craniofacial bone defects can result from various disorders, including congenital malformations, tumor resection, infection, severe trauma, and accidents. Successfully regenerating cranial defects is an integral step in restoring craniofacial function. However, challenges in managing and controlling new bone tissue formation remain. Current advances in tissue engineering and regenerative medicine use innovative techniques to address these challenges. The use of biomaterials, stromal cells, and growth factors has demonstrated promising outcomes in vitro and in vivo. Natural and synthetic bone grafts combined with Mesenchymal Stromal Cells (MSCs) and growth factors have shown encouraging results in regenerating critical-size cranial defects. One of the most prevalent growth factors is Bone Morphogenetic Protein-2 (BMP-2), considered a gold-standard growth factor that enhances new bone formation in vitro and in vivo. Recently, emerging evidence has suggested that Megakaryocytes (MKs), induced by Thrombopoietin (TPO), increase osteoblast proliferation in vitro and bone mass in vivo. Furthermore, a co-culture study shows that mature MKs enhance the MSC survival rate while maintaining their phenotype. MKs may therefore offer a safe and effective potential therapy for regenerating critical-size cranial defects.
Affiliation(s)
- Arbi Aghali
- Department of Physiology and Biomedical Engineering, Mayo Clinic, Rochester, MN 55905, USA
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN 47908, USA
12
Le Mau T, Hoemann K, Lyons SH, Fugate JMB, Brown EN, Gendron M, Barrett LF. Professional actors demonstrate variability, not stereotypical expressions, when portraying emotional states in photographs. Nat Commun 2021; 12:5037. [PMID: 34413313] [PMCID: PMC8376986] [DOI: 10.1038/s41467-021-25352-6]
Abstract
It has long been hypothesized that there is a reliable, specific mapping between certain emotional states and the facial movements that express those states. This hypothesis is often tested by asking untrained participants to pose the facial movements they believe they use to express emotions during generic scenarios. Here, we test this hypothesis using, as stimuli, photographs of facial configurations posed by professional actors in response to contextually rich scenarios. The scenarios portrayed in the photographs were rated by a convenience sample of participants for the extent to which they evoked an instance of 13 emotion categories, and the actors' facial poses were coded for their specific movements. Both unsupervised and supervised machine learning show that, in these photographs, the actors portrayed emotional states with variable facial configurations; instances of only three emotion categories (fear, happiness, and surprise) were portrayed with moderate reliability and specificity. The photographs were separately rated by another sample of participants for the extent to which they portrayed an instance of the 13 emotion categories; they were rated both when presented alone and when presented with their associated scenarios, revealing that emotion inferences by participants also vary in a context-sensitive manner. Together, these findings suggest that facial movements and perceptions of emotion vary by situation and transcend stereotypes of emotional expressions. Future research may build on these findings by incorporating dynamic stimuli rather than photographs and by studying a broader range of cultural contexts.
Affiliation(s)
- Tuan Le Mau
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Institute for High Performance Computing, Social and Cognitive Computing, Connexis North, Singapore
| | - Katie Hoemann
- Department of Psychology, Katholieke Universiteit Leuven, Leuven, Belgium
| | - Sam H Lyons
- Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA
| | - Jennifer M B Fugate
- Department of Psychology, University of Massachusetts at Dartmouth, Dartmouth, MA, 02747, USA
| | - Emery N Brown
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Maria Gendron
- Department of Psychology, Yale University, New Haven, CT, USA
| | - Lisa Feldman Barrett
- Department of Psychology, Northeastern University, Boston, MA, USA.
- Massachusetts General Hospital/Martinos Center for Biomedical Imaging, Charlestown, MA, USA.
| |
13. Poltoratski S, Kay K, Finzi D, Grill-Spector K. Holistic face recognition is an emergent phenomenon of spatial processing in face-selective regions. Nat Commun 2021; 12:4745. PMID: 34362883; PMCID: PMC8346587; DOI: 10.1038/s41467-021-24806-1.
Abstract
Spatial processing by receptive fields is a core property of the visual system. However, it is unknown how spatial processing in high-level regions contributes to recognition behavior. As face inversion is thought to disrupt typical holistic processing of information in faces, we mapped population receptive fields (pRFs) with upright and inverted faces in the human visual system. Here we show that in face-selective regions, but not primary visual cortex, pRFs and overall visual field coverage are smaller and shifted downward in response to face inversion. From these measurements, we successfully predict the relative behavioral detriment of face inversion at different positions in the visual field. This correspondence between neural measurements and behavior demonstrates how spatial processing in face-selective regions may enable holistic perception. These results not only show that spatial processing in high-level visual regions is dynamically used towards recognition, but also suggest a powerful approach for bridging neural computations by receptive fields to behavior.
Affiliation(s)
- Kendrick Kay: Center for Magnetic Resonance Research (CMRR), Department of Radiology, University of Minnesota, Minneapolis, MN, USA
- Dawn Finzi: Department of Psychology, Stanford University, Stanford, CA, USA
- Kalanit Grill-Spector: Department of Psychology, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
14. Kurosumi M, Mizukoshi K, Hongo M, Kamachi MG. Does age-dynamic movement accelerate facial age impression? Perception of age from facial movement: Studies of Japanese women. PLoS One 2021; 16:e0255570. PMID: 34351981; PMCID: PMC8341570; DOI: 10.1371/journal.pone.0255570.
Abstract
We form impressions of others by observing their constant and dynamically shifting facial expressions during conversation and other daily activities. However, conventional aging research has mainly considered the changing characteristics of the skin, such as wrinkles and age spots, within very limited states of static faces. To elucidate the range of aging impressions that we form in daily life, the effects of facial movement must be considered. This study investigated the effects of facial movement on age impressions. An age-perception test using Japanese women as face models was conducted with 112 observers (all women, aged 20-49 years) to verify the effects of the models' age-dependent facial movements on age impressions. Further, the observers' gaze was analyzed to identify the facial areas of interest during age perception. The results showed that cheek movement affects age impressions, and that this effect increases with the model's age. These findings will facilitate the development of new means of creating a more youthful impression by approaching anti-aging from the different viewpoint of facial movement.
Affiliation(s)
- Motonori Kurosumi: Graduate School of Informatics, Kogakuin University, Shinjuku, Tokyo, Japan; POLA Chemical Industries, Inc., Tokyo, Japan
- Maya Hongo: POLA Chemical Industries, Inc., Tokyo, Japan
15. Cotofana S, Hamade H, Bertucci V, Fagien S, Green JB, Pavicic T, Nikolis A, Lachman N, Hadjab A, Frank K. Change in Rheologic Properties of Facial Soft-Tissue Fillers across the Physiologic Angular Frequency Spectrum. Plast Reconstr Surg 2021; 148:320-331. PMID: 34398083; DOI: 10.1097/prs.0000000000008188.
Abstract
BACKGROUND The number of soft-tissue filler injections performed in the United States is constantly increasing and reflects the high demand for enhanced facial and body attractiveness. The objective of the present study was to measure the viscoelastic properties of soft-tissue fillers when subjected to different testing frequencies. The range of tested frequencies represents clinically different facial areas with more (e.g., the lips; high frequency) or less (e.g., the zygomatic arch; low frequency) soft-tissue movement. METHODS A total of 35 randomly selected hyaluronic acid-based dermal filler products were tested in an independent laboratory for their values of G', G″, tan δ, and G* at angular frequencies between 0.1 and 100 radians/second. RESULTS The viscoelastic properties of all tested products changed between 0.1 and 100 radians/second angular frequency. Changes in G' ranged from 48.5 to 3116 percent, representing an increase in the initial elastic modulus, whereas changes in G″ ranged from -53.3 percent (a decrease) to 7741 percent (an increase), indicating that fluidity could either decrease or increase. CONCLUSIONS An increase in G' indicates a transition from a "softer" to a "harder" filler, and the observed decrease in G″ indicates an increase in the filler's "fluidity." Changes in the frequency of applied shear forces, such as those occurring in the medial versus the lateral face, will influence the aesthetic outcome of soft-tissue filler injections.
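For reference, the quantities reported above are related by the standard definitions of oscillatory shear rheology (textbook background, not derived from the study itself):

```latex
G^{*} = G' + iG'' , \qquad
\lvert G^{*} \rvert = \sqrt{G'^{2} + G''^{2}} , \qquad
\tan\delta = \frac{G''}{G'}
```

Here G' is the storage (elastic) modulus, G″ the loss (viscous) modulus, and G* the complex shear modulus; a rising tan δ means more fluid-like, less elastic behavior at that frequency.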
Affiliation(s)
- Sebastian Cotofana, Hassan Hamade, Vince Bertucci, Steven Fagien, Jeremy B Green, Tatjana Pavicic, Andreas Nikolis, Nirusha Lachman, Abdelbasste Hadjab, Konstantin Frank
- From the Department of Clinical Anatomy, Mayo Clinic College of Medicine and Science; Division of Anatomy, Department of Medical Education, Albany Medical College; Division of Dermatology, University of Toronto; private practice; Skin Associates of South Florida, Skin Research Institute; the Erevna Innovations, Inc., Clinical Research Unit; Division of Plastic Surgery, McGill University; and the Department for Hand, Plastic and Aesthetic Surgery, Ludwig-Maximilian University
16. Morr M, Lieberz J, Dobbelstein M, Philipsen A, Hurlemann R, Scheele D. Insula reactivity mediates subjective isolation stress in alexithymia. Sci Rep 2021; 11:15326. PMID: 34321519; PMCID: PMC8319294; DOI: 10.1038/s41598-021-94799-w.
Abstract
The risk for developing stress-related disorders is elevated in individuals with high alexithymia, a personality trait characterized by impaired emotional awareness and interpersonal relating. However, it is still unclear how alexithymia alters perceived psychosocial stress and which neurobiological substrates are mechanistically involved. To address this question, we examined freshmen during transition to university, given that this period entails psychosocial stress and frequently initiates psychopathology. Specifically, we used a functional magnetic resonance imaging emotional face matching task to probe emotional processing in 54 participants (39 women) at the beginning of the first year at university and 6 months later. Furthermore, we assessed alexithymia and monitored perceived psychosocial stress and loneliness via questionnaires for six consecutive months. Perceived psychosocial stress significantly increased over time and initial alexithymia predicted subjective stress experiences via enhanced loneliness. On the neural level, alexithymia was associated with lowered amygdala responses to emotional faces, while loneliness correlated with diminished reactivity in the anterior insular and anterior cingulate cortex. Furthermore, insula activity mediated the association between alexithymia and loneliness that predicted perceived psychosocial stress. Our findings are consistent with the notion that alexithymia exacerbates subjective stress via blunted insula reactivity and increased perception of social isolation.
Affiliation(s)
- Mitjan Morr, Jana Lieberz, Michael Dobbelstein: Division of Medical Psychology, Department of Psychiatry and Psychotherapy, University Hospital Bonn, Venusberg-Campus 1, 53127, Bonn, Germany
- Alexandra Philipsen: Department of Psychiatry and Psychotherapy, University Hospital Bonn, 53127, Bonn, Germany
- René Hurlemann: Department of Psychiatry, School of Medicine and Health Sciences, University of Oldenburg, Hermann-Ehlers-Str. 7, 26129, Oldenburg, Germany; Research Center Neurosensory Science, University of Oldenburg, 26129, Oldenburg, Germany
- Dirk Scheele: Division of Medical Psychology, Department of Psychiatry and Psychotherapy, University Hospital Bonn, Venusberg-Campus 1, 53127, Bonn, Germany; Department of Psychiatry, School of Medicine and Health Sciences, University of Oldenburg, Hermann-Ehlers-Str. 7, 26129, Oldenburg, Germany
17. Schumann NP, Bongers K, Scholle HC, Guntinas-Lichius O. Atlas of voluntary facial muscle activation: Visualization of surface electromyographic activities of facial muscles during mimic exercises. PLoS One 2021; 16:e0254932. PMID: 34280246; PMCID: PMC8289121; DOI: 10.1371/journal.pone.0254932.
Abstract
Complex facial muscle movements are essential for many motor and emotional functions. Facial muscles are unique in the musculoskeletal system: they are interwoven, so the contraction of one muscle influences the contractile characteristics of other mimic muscles, and they act more as a functional whole than as independent single muscles. The standard method for detecting these complex interactions in clinical and psychosocial experiments is surface electromyography (sEMG). What has been missing is an atlas showing which facial muscles are activated during specific tasks. Based on high-resolution sEMG data from 10 facial muscles on both sides of the face, recorded simultaneously during 29 different facial muscle tasks, an atlas visualizing voluntary facial muscle activation was developed. For each task, the mean normalized EMG amplitudes of the examined facial muscles are visualized by color, spread between the lowest and highest EMG activity: gray shades represent no to very low activity, light and dark brown shades low to medium activity, and red shades high to very high activity, relative to each task. The atlas should become a helpful tool for designing sEMG experiments, not only for clinical trials and psychological experiments but also for speech therapy and orofacial rehabilitation studies.
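The per-task color coding described above (normalize each muscle's mean amplitude to the task's own range, then bin into gray, brown, or red shades) can be sketched as follows. The band boundaries are illustrative assumptions, not values from the paper:

```python
def color_bands(amplitudes, low=0.2, high=0.6):
    """Map mean EMG amplitudes (one per muscle, same task) to color bands.

    Amplitudes are min-max normalized within this task, then binned:
    gray (no/very low), brown (low/medium), red (high/very high).
    The 0.2 / 0.6 thresholds are illustrative, not the paper's.
    """
    lo, hi = min(amplitudes), max(amplitudes)
    span = (hi - lo) or 1.0          # avoid division by zero on flat input
    bands = []
    for a in amplitudes:
        x = (a - lo) / span          # normalize within this task
        bands.append("gray" if x < low else "brown" if x < high else "red")
    return bands

# e.g. five muscles' mean normalized amplitudes during one mimic task
bands = color_bands([0.05, 0.1, 0.4, 0.9, 0.2])
```

Because normalization is done per task, the same absolute amplitude can map to different shades in different tasks, which matches the atlas' "relative to each task" convention.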
Affiliation(s)
- Nikolaus P. Schumann, Kevin Bongers, Hans C. Scholle: Division Motor Research, Pathophysiology and Biomechanics, Department of Trauma, Hand and Reconstructive Surgery, Jena University Hospital, Friedrich-Schiller-University Jena, Jena, Germany
- Orlando Guntinas-Lichius: Department of Otolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Jena, Germany
18. Del Popolo Cristaldi F, Mento G, Sarlo M, Buodo G. Dealing with uncertainty: A high-density EEG investigation on how intolerance of uncertainty affects emotional predictions. PLoS One 2021; 16:e0254045. PMID: 34197554; PMCID: PMC8248604; DOI: 10.1371/journal.pone.0254045.
Abstract
Intolerance of uncertainty (IU) can influence emotional predictions, which the brain constructs (generation stage) to prearrange action (implementation stage) and updates according to incoming stimuli (updating stage). However, the neurocomputational mechanisms by which IU affects emotional predictions are unclear. This high-density EEG study investigated whether IU predicted event-related potentials (ERPs) and brain-source activity developing across the stages of emotional prediction, as a function of contextual uncertainty. Thirty-six undergraduates underwent an S1-S2 paradigm, with emotional faces and pictures as S1s and S2s, respectively. Contextual uncertainty was manipulated across three blocks, each with 100%, 75%, or 50% S1-S2 emotional congruency. ERPs, brain sources, and their relationship with IU scores were analyzed for each stage. IU did not affect prediction generation. During prediction implementation, higher IU predicted a larger Contingent Negative Variation in the 75% block, and lower activation of the left anterior cingulate cortex and supplementary motor area. During prediction updating, higher IU predicted a smaller P2 to positive S2s, a smaller P2 and Late Positive Potential in the 75% block, and reduced right orbito-frontal cortex activity to emotional S2s. IU was therefore associated with altered uncertainty assessment and heightened attention deployment during implementation, and with uncertainty avoidance, reduced attention to safety cues, and disrupted access to emotion-regulation strategies during prediction updating.
Affiliation(s)
- Giovanni Mento: Department of General Psychology, University of Padua, Padova, Italy; Padua Neuroscience Center (PNC), University of Padua, Padova, Italy
- Michela Sarlo: Department of Communication Sciences, Humanities and International Studies, University of Urbino Carlo Bo, Urbino, Italy
- Giulia Buodo: Department of General Psychology, University of Padua, Padova, Italy
19. Lundblad J, Rashid M, Rhodin M, Haubro Andersen P. Effect of transportation and social isolation on facial expressions of healthy horses. PLoS One 2021; 16:e0241532. PMID: 34086704; PMCID: PMC8177539; DOI: 10.1371/journal.pone.0241532.
Abstract
Horses have the ability to generate a remarkable repertoire of facial expressions, some of which have been linked to the affective component of pain. This study describes the facial expressions of healthy, pain-free horses before and during transportation and social isolation, which are putatively stressful but ordinary management procedures. Transportation was performed in 28 horses by subjecting them to short-term road transport in a horse trailer. A subgroup (n = 10) of these horses was also subjected to short-term social isolation. During all procedures, a body-mounted, remote-controlled heart rate monitor provided continuous heart rate measurements. The horses' heads were video-recorded during the interventions. From the selected video clips, an exhaustive dataset was generated of all possible facial action units and action descriptors, with their time of emergence, duration, and frequency, according to the Equine Facial Action Coding System (EquiFACS). Heart rate increased during both interventions (p<0.01), confirming that they disrupted sympatho-vagal balance. Using the current method for ascribing certain action units (AUs) to specific emotional states in humans together with a novel data-driven co-occurrence method, the following facial traits were observed during both interventions: eye white increase (p<0.001), nostril dilator (p<0.001), upper eyelid raiser (p<0.001), inner brow raiser (p = 0.042), and tongue show (p<0.001). Increases in 'ear flicker' (p<0.001) and blink frequency (p<0.001) were also seen. These facial actions were used to train a machine-learning classifier to discriminate between the high-arousal interventions and calm horses, which achieved at most 79% accuracy. Most facial features identified correspond well with previous findings on the behavior of stressed horses, for example flared nostrils, repetitive mouth behaviors, increased eye white, tongue show, and ear movements.
Several features identified in this study of pain-free horses, such as dilated nostrils, eye white increase, and inner brow raiser, are used as indicators of pain in some face-based pain assessment tools. To improve the performance of such pain assessment tools, the relations between facial expressions of stress and pain should be studied further.
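As an illustration of the final classification step (the abstract specifies only "a machine-learning classifier", so this is not the authors' pipeline), a minimal nearest-centroid classifier over binary AU-presence vectors might look like the sketch below; the feature names and training rows are invented:

```python
# Hypothetical binary features: 1 = the AU was observed in a clip, 0 = not.
FEATURES = ["eye_white_increase", "nostril_dilator", "upper_eyelid_raiser",
            "inner_brow_raiser", "tongue_show"]

train = [
    ([1, 1, 1, 0, 1], "stressed"),
    ([1, 1, 0, 1, 0], "stressed"),
    ([0, 0, 0, 0, 0], "calm"),
    ([0, 1, 0, 0, 0], "calm"),
]

def centroid(rows):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def fit(data):
    """One centroid per class label."""
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    """Assign x to the class with the nearest (squared-distance) centroid."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda y: dist(model[y]))

model = fit(train)
label = predict(model, [1, 1, 1, 1, 1])  # many stress-linked AUs active
```

A clip showing many of the stress-linked AUs lands near the "stressed" centroid; a clip with none lands near "calm". Any real pipeline would of course cross-validate and use far richer features (durations, frequencies, co-occurrences).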
Affiliation(s)
- Johan Lundblad, Marie Rhodin, Pia Haubro Andersen: Department of Anatomy, Physiology and Biochemistry, Swedish University of Agricultural Sciences, Uppsala, Sweden
- Maheen Rashid: Department of Computer Science, University of California, Davis, California, United States of America
20. Marsh N, Scheele D, Postin D, Onken M, Hurlemann R. Eye-Tracking Reveals a Role of Oxytocin in Attention Allocation Towards Familiar Faces. Front Endocrinol (Lausanne) 2021; 12:629760. PMID: 34079520; PMCID: PMC8165288; DOI: 10.3389/fendo.2021.629760.
Abstract
Visual attention directed towards the eye-region of a face emerges rapidly, even before conscious awareness, and regulates social interactions in terms of approach versus avoidance. Current perspectives on the neuroendocrine substrates of this behavioral regulation highlight a role of the peptide hormone oxytocin (OXT), but it remains unclear whether the facilitating effects of OXT vary as a function of facial familiarity. Here, a total of 73 healthy participants was enrolled in an eye-tracking experiment specifically designed to test whether intranasal OXT (24 IU) augments gaze duration toward the eye-region across four different face categories: the participants' own face, the face of their romantic partner, the face of a familiar person (close friend) or an unfamiliar person (a stranger). We found that OXT treatment induced a tendency to spend more time looking into the eyes of familiar persons (partner and close friend) as compared to placebo. This effect was not evident in the self and unfamiliar conditions. Independent of treatment, volunteers scoring high on autistic-like traits (AQ-high) spent less time looking at the eyes of all faces except their partner. Collectively, our results show that the OXT system is involved in facilitating an attentional bias towards the eye region of familiar faces, which convey safety and support, especially in anxious contexts. In contrast, autistic-like traits were associated with reduced attention to the eye region of a face regardless of familiarity and OXT-treatment.
Affiliation(s)
- Nina Marsh, Danilo Postin, Marc Onken: Department of Psychiatry, School of Medicine and Health Sciences, Carl von Ossietzky University Oldenburg, Oldenburg, Germany
- Dirk Scheele: Department of Psychiatry, School of Medicine and Health Sciences, Carl von Ossietzky University Oldenburg, Oldenburg, Germany; Department of Psychiatry, University Hospital Bonn, Bonn, Germany
- Rene Hurlemann: Department of Psychiatry, School of Medicine and Health Sciences, Carl von Ossietzky University Oldenburg, Oldenburg, Germany; Department of Psychiatry, University Hospital Bonn, Bonn, Germany; Research Center Neurosensory Science, Carl von Ossietzky University Oldenburg, Oldenburg, Germany
21. Mascaró M, Serón FJ, Perales FJ, Varona J, Mas R. Laughter and smiling facial expression modelling for the generation of virtual affective behavior. PLoS One 2021; 16:e0251057. PMID: 33979375; PMCID: PMC8115814; DOI: 10.1371/journal.pone.0251057.
Abstract
Laughter and smiling are significant facial expressions in human-to-human communication. We present a computational model for generating the facial expressions associated with laughter and smiling, to facilitate the synthesis of such expressions in virtual characters. In addition, a new method to reproduce these types of laughter is proposed and validated using databases of generic and specific facial smile expressions, including a proprietary database that lists the different types of laughs classified and generated in this work. The generated expressions were validated through a user study with 71 subjects, which concluded that virtual-character expressions built with the presented model are perceptually acceptable in quality and facial-expression fidelity. Finally, for generalization purposes, an additional analysis shows that the results are independent of the virtual character's appearance.
Affiliation(s)
- Miquel Mascaró, Francisco J. Perales, Javier Varona, Ramon Mas: Department of Mathematics and Computer Science, University of the Balearic Islands, Palma de Mallorca, Spain
22. Novais A, Chatzopoulou E, Chaussain C, Gorin C. The Potential of FGF-2 in Craniofacial Bone Tissue Engineering: A Review. Cells 2021; 10:932. PMID: 33920587; PMCID: PMC8073160; DOI: 10.3390/cells10040932.
Abstract
Bone is a hard, vascularized tissue that renews itself continuously to adapt to the mechanical and metabolic demands of the body. The craniofacial area is prone to trauma and pathologies that often result in large bone defects, leading to both aesthetic and functional complications for patients. The "gold standard" for treating these large defects is autologous bone grafting, which has drawbacks including the need for a second surgical site, limits on the quantity of harvestable bone, pain, and other surgical complications. Tissue engineering, combining a biomaterial with appropriate cells and molecules of interest, would allow a new therapeutic approach to treating large bone defects while avoiding the complications associated with a second surgical site. This review first outlines current knowledge of bone remodeling and the different signaling pathways involved, seeking to improve understanding of each pathway's role so that it can be stimulated or inhibited as needed. It then highlights the interesting characteristics of one growth factor in particular, FGF-2, and its role in bone homeostasis, before analyzing its potential usefulness in craniofacial bone tissue engineering given its proliferative, pro-angiogenic, and pro-osteogenic effects, which depend on its spatio-temporal use, dose, and mode of administration.
Affiliation(s)
- Anita Novais
- Pathologies, Imagerie et Biothérapies Orofaciales, Université de Paris, URP2496, 1 rue Maurice Arnoux, 92120 Montrouge, France
- AP-HP Département d’Odontologie, Services d’odontologie, GH Pitié Salpêtrière, Henri Mondor, Paris Nord, Hôpital Rothschild, Paris, France
- Eirini Chatzopoulou
- Pathologies, Imagerie et Biothérapies Orofaciales, Université de Paris, URP2496, 1 rue Maurice Arnoux, 92120 Montrouge, France
- AP-HP Département d’Odontologie, Services d’odontologie, GH Pitié Salpêtrière, Henri Mondor, Paris Nord, Hôpital Rothschild, Paris, France
- Département de Parodontologie, Université de Paris, UFR Odontologie-Garancière, 75006 Paris, France
- Catherine Chaussain
- Pathologies, Imagerie et Biothérapies Orofaciales, Université de Paris, URP2496, 1 rue Maurice Arnoux, 92120 Montrouge, France
- AP-HP Département d’Odontologie, Services d’odontologie, GH Pitié Salpêtrière, Henri Mondor, Paris Nord, Hôpital Rothschild, Paris, France
- Caroline Gorin
- Pathologies, Imagerie et Biothérapies Orofaciales, Université de Paris, URP2496, 1 rue Maurice Arnoux, 92120 Montrouge, France
- AP-HP Département d’Odontologie, Services d’odontologie, GH Pitié Salpêtrière, Henri Mondor, Paris Nord, Hôpital Rothschild, Paris, France
- Correspondence: Tel./Fax: +33-(0)1-5807-6724
23
Abstract
People make judgments of others based on appearance, and these inferences can affect social interactions. Although the importance of facial appearance in these judgments is well established, the impact of the body morphology remains unclear. Specifically, it is unknown whether experimentally varied body morphology has an impact on perception of threat in others. In two preregistered experiments (N = 250), participants made judgments of perceived threat of body stimuli of varying morphology, both in the absence (Experiment 1) and presence (Experiment 2) of facial information. Bodies were perceived as more threatening as they increased in mass with added musculature and portliness, and less threatening as they increased in emaciation. The impact of musculature endured even in the presence of faces, although faces contributed more to the overall threat judgment. The relative contributions of the faces and bodies seemed to be driven by discordance, such that threatening faces exerted the most influence when paired with non-threatening bodies, and vice versa. This suggests that the faces and bodies were not perceived as entirely independent and separate components. Overall, these findings suggest that body morphology plays an important role in perceived threat and may bias real-world judgments.
Affiliation(s)
- Terence J. McElvaney
- Department of Biological and Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, London, United Kingdom
- Magda Osman
- Department of Biological and Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, London, United Kingdom
- Isabelle Mareschal
- Department of Biological and Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, London, United Kingdom
24
Hodges-Simeon CR, Albert G, Richardson GB, McHale TS, Weinberg SM, Gurven M, Gaulin SJC. Was facial width-to-height ratio subject to sexual selection pressures? A life course approach. PLoS One 2021; 16:e0240284. PMID: 33711068; PMCID: PMC7954343; DOI: 10.1371/journal.pone.0240284. Received 09/22/2020; accepted 01/16/2021.
Abstract
Sexual selection researchers have traditionally focused on adult sex differences; however, the schedule and pattern of sex-specific ontogeny can provide insights unobtainable from an exclusive focus on adults. Recently, it has been debated whether facial width-to-height ratio (fWHR; bi-zygomatic breadth divided by midface height) is a human secondary sexual characteristic (SSC). Here, we review current evidence, then address this debate using ontogenetic evidence, which has been under-explored in fWHR research. Facial measurements were collected from 3D surface images of males and females aged 3 to 40 (Study 1; US European-descent, n = 2449), and from 2D photographs of males and females aged 7 to 21 (Study 2; Bolivian Tsimane, n = 179), which were used to calculate three fWHR variants (which we call fWHRnasion, fWHRstomion, and fWHRbrow) and two other common facial masculinity ratios (facial width-to-lower-face-height ratio, fWHRlower, and cheekbone prominence). We test whether the observed pattern of facial development exhibits patterns indicative of SSCs, i.e., differential adolescent growth in either male or female facial morphology leading to an adult sex difference. Results showed that only fWHRlower exhibited both adult sex differences and the classic pattern of ontogeny for SSCs: greater lower-face growth in male adolescents relative to females. fWHRbrow was significantly wider among both pre- and post-pubertal males in the Bolivian Tsimane sample; post-hoc analyses revealed that the effect was driven by large sex differences in brow height, with females having higher-placed brows than males across ages. In both samples, all fWHR measures were inversely associated with age; that is, human facial growth is characterized by greater relative elongation of the mid-face and lower face relative to facial width, a trend that continues even into middle adulthood. BMI was also a positive predictor of most of the ratios across ages, with greater BMI associated with wider faces. Researchers collecting data on fWHR should target fWHRlower and fWHRbrow and should control for both age and BMI. Researchers should also compare ratio approaches with multivariate techniques, such as geometric morphometrics, to examine whether the latter have greater utility for understanding the evolution of facial sexual dimorphism.
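The ratios described above are simple landmark computations. A minimal Python sketch; the landmark names, coordinate values, and the exact vertical endpoints chosen for each variant are illustrative assumptions, not the authors' measurement protocol:

```python
# Hypothetical landmark y-coordinates (pixels, y increasing downward)
# and bizygomatic width for one face; all values are illustrative.
bizygomatic_width = 140.0
brow_y, nasion_y, subnasale_y, upper_lip_y, menton_y = 50.0, 60.0, 120.0, 150.0, 210.0

def fwhr(width, upper_y, lower_y):
    """Facial width-to-height ratio for a chosen vertical extent."""
    height = lower_y - upper_y
    if height <= 0:
        raise ValueError("lower landmark must sit below upper landmark")
    return width / height

fwhr_nasion = fwhr(bizygomatic_width, nasion_y, upper_lip_y)   # ~fWHRnasion
fwhr_brow = fwhr(bizygomatic_width, brow_y, upper_lip_y)       # ~fWHRbrow
fwhr_lower = fwhr(bizygomatic_width, subnasale_y, menton_y)    # ~width-to-lower-face-height
```

Because the brow sits above the nasion, fWHRbrow divides by a taller face height and so comes out smaller than fWHRnasion for the same width; the variants are not interchangeable, and age and BMI (both of which alter these distances) need to be controlled, as the abstract recommends.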
Affiliation(s)
- Carolyn R Hodges-Simeon
- Department of Anthropology, Boston University, Boston, Massachusetts, United States of America
- Graham Albert
- Department of Anthropology, Boston University, Boston, Massachusetts, United States of America
- George B Richardson
- School of Human Services, University of Cincinnati, Cincinnati, Ohio, United States of America
- Timothy S McHale
- Department of Anthropology, Boston University, Boston, Massachusetts, United States of America
- Department of Anthropology and Museum Studies, Central Washington University, Ellensburg, Washington, United States of America
- Seth M Weinberg
- Center for Craniofacial and Dental Genetics, Department of Oral Biology, School of Dental Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Department of Anthropology, Dietrich School of Arts and Sciences, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Michael Gurven
- Department of Anthropology, University of California, Santa Barbara, California, United States of America
- Steven J C Gaulin
- Department of Anthropology, University of California, Santa Barbara, California, United States of America
25
Żelaźniewicz A, Nowak-Kornicka J, Zbyrowska K, Pawłowski B. Predicted reproductive longevity and women's facial attractiveness. PLoS One 2021; 16:e0248344. PMID: 33690719; PMCID: PMC7946180; DOI: 10.1371/journal.pone.0248344. Received 07/03/2020; accepted 02/22/2021.
Abstract
Physical attractiveness has been shown to reflect women's current fecundity level, allowing a man to choose a potentially more fertile partner in a mate choice context. However, women vary not only in fecundity level at reproductive age but also in reproductive longevity, and both influence a couple's long-term reproductive success. Thus, men should choose a potential partner based not only on cues of current fecundity but also on cues of reproductive longevity, and both may be reflected in women's appearance. In this study, we investigated whether a woman's facial attractiveness at reproductive age reflects anti-Müllerian hormone (AMH) level, a hormonal predictor of age at menopause, in the same way as it reflects current fecundity level, estimated with estradiol (E2) level. Face photographs of 183 healthy women (Mage = 28.49, SDage = 2.38), recruited between the 2nd and 4th day of the menstrual cycle, were assessed by men for attractiveness. Women's health status was evaluated based on C-reactive protein level and a biochemical blood test. Serum AMH and E2 were measured. The results showed that facial attractiveness was negatively correlated with AMH level, a hormonal indicator of expected age at menopause, and positively correlated with E2, an indicator of current fecundity level, also when controlling for potential covariates (testosterone, BMI, age). This might result from a biological trade-off between high fecundity and the length of the reproductive lifespan in women, and from the greater adaptive importance of high fecundity at reproductive age compared to the length of the reproductive lifespan.
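Covariate-controlled correlations like those reported above can be computed generically as partial correlations: correlate the residuals left after regressing both variables on the covariates. A numpy sketch on synthetic data; the variable names, effect sizes, and the single BMI covariate are invented for illustration, not the study's data:

```python
import numpy as np

def partial_corr(x, y, covars):
    """Correlation of x and y after regressing both on the covariates."""
    Z = np.column_stack([np.ones(len(x))] + list(covars))
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(42)
n = 300
bmi = rng.normal(22, 3, n)                         # shared covariate
attract = -0.1 * bmi + rng.normal(0, 0.3, n)       # synthetic outcome 1
e2 = -0.1 * bmi + rng.normal(0, 0.3, n)            # synthetic outcome 2

raw = float(np.corrcoef(attract, e2)[0, 1])        # inflated by shared BMI
adjusted = partial_corr(attract, e2, [bmi])        # BMI partialled out
```

In this construction the raw correlation between the two outcomes is driven entirely by the shared covariate, so the adjusted estimate collapses toward zero, which is exactly the confound that controlling for testosterone, BMI and age guards against.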
26
Barnett BO, Brooks JA, Freeman JB. Stereotypes bias face perception via orbitofrontal-fusiform cortical interaction. Soc Cogn Affect Neurosci 2021; 16:302-314. PMID: 33270131; PMCID: PMC7943359; DOI: 10.1093/scan/nsaa165. Received 09/23/2020; revised 11/11/2020; accepted 12/02/2020.
Abstract
Previous research has shown that social-conceptual associations, such as stereotypes, can influence the visual representation of faces and neural pattern responses in ventral temporal cortex (VTC) regions, such as the fusiform gyrus (FG). Current models suggest that this social-conceptual impact requires medial orbitofrontal cortex (mOFC) feedback signals during perception. Backward masking can disrupt such signals, as it is a technique known to reduce functional connectivity between VTC regions and regions outside VTC. During functional magnetic resonance imaging (fMRI), subjects passively viewed masked and unmasked faces, and following the scan, perceptual biases and stereotypical associations were assessed. Multi-voxel representations of faces across the VTC, and in the FG and mOFC, reflected stereotypically biased perceptions when faces were unmasked, but this effect was abolished when faces were masked. However, the VTC still retained the ability to process masked faces and was sensitive to their categorical distinctions. Functional connectivity analyses confirmed that masking disrupted mOFC-FG connectivity, which predicted a reduced impact of stereotypical associations in the FG. Taken together, our findings suggest that the biasing of face representations in line with stereotypical associations does not arise from intrinsic processing within the VTC and FG alone, but instead it depends in part on top-down feedback from the mOFC during perception.
Affiliation(s)
- Benjamin O Barnett
- Division of Psychology and Language Sciences, University College London, London WC1E 6BT, UK
- Jeffrey A Brooks
- Department of Psychology, New York University, New York, NY 10003, USA
- Jonathan B Freeman
- Department of Psychology, New York University, New York, NY 10003, USA
- Center for Neural Science, New York University, New York, NY 10003, USA
27
Li H, Wang N, Ding X, Yang X, Gao X. Adaptively Learning Facial Expression Representation via C-F Labels and Distillation. IEEE Trans Image Process 2021; 30:2016-2028. PMID: 33439841; DOI: 10.1109/tip.2021.3049955.
Abstract
Facial expression recognition is of significant importance in criminal investigation and digital entertainment. Under unconstrained conditions, existing expression datasets are highly class-imbalanced, and the similarity between expressions is high. Previous methods tend to improve the performance of facial expression recognition through deeper or wider network structures, resulting in increased storage and computing costs. In this paper, we propose a new adaptive supervised objective named AdaReg loss, re-weighting category importance coefficients to address this class imbalance and increasing the discrimination power of expression representations. Inspired by human beings' cognitive mode, an innovative coarse-fine (C-F) labels strategy is designed to guide the model from easy to difficult to classify highly similar representations. On this basis, we propose a novel training framework named the emotional education mechanism (EEM) to transfer knowledge, composed of a knowledgeable teacher network (KTN) and a self-taught student network (STSN). Specifically, KTN integrates the outputs of coarse and fine streams, learning expression representations from easy to difficult. Under the supervision of the pre-trained KTN and existing learning experience, STSN can maximize the potential performance and compress the original KTN. Extensive experiments on public benchmarks demonstrate that the proposed method achieves superior performance compared to current state-of-the-art frameworks with 88.07% on RAF-DB, 63.97% on AffectNet and 90.49% on FERPlus.
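The abstract does not give AdaReg's exact formulation, but the core idea it names, re-weighting category importance coefficients to counter class imbalance, can be sketched as a generic inverse-frequency weighted cross-entropy. This is a stand-in for illustration, not the authors' loss:

```python
import numpy as np

def weighted_cross_entropy(probs, targets, class_counts):
    """Cross-entropy with per-class weights inversely proportional to
    class frequency, so rare expression categories count for more."""
    weights = class_counts.sum() / (len(class_counts) * class_counts)
    per_sample = -np.log(probs[np.arange(len(targets)), targets])
    return float(np.mean(weights[targets] * per_sample))

# Toy 3-class example: class 2 is rare and therefore up-weighted
probs = np.array([[0.7, 0.2, 0.1],
                  [0.6, 0.3, 0.1],
                  [0.1, 0.2, 0.7]])
targets = np.array([0, 0, 2])
counts = np.array([100, 80, 20])
loss = weighted_cross_entropy(probs, targets, counts)
```

With these counts the rare class receives a weight five times that of the majority class, so errors on under-represented expressions dominate the gradient rather than being drowned out.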
28
Pavlovič O, Fiala V, Kleisner K. Environmental convergence in facial preferences: a cross-group comparison of Asian Vietnamese, Czech Vietnamese, and Czechs. Sci Rep 2021; 11:550. PMID: 33436663; PMCID: PMC7804147; DOI: 10.1038/s41598-020-79623-1. Received 03/21/2020; accepted 12/09/2020.
Abstract
It has been demonstrated that sociocultural environment has a significant impact on human behavior. This contribution focuses on differences in the perception of attractiveness of European (Czech) faces as rated by Czechs of European origin, Vietnamese persons living in the Czech Republic and Vietnamese who permanently reside in Vietnam. We investigated whether attractiveness judgments and preferences for facial sex-typicality and averageness in Vietnamese who grew up and live in the Czech Republic are closer to the judgements and preferences of Czech Europeans or to those of Vietnamese born and residing in Vietnam. We examined the relative contribution of sexual shape dimorphism and averageness to the perception of facial attractiveness across all three groups of raters. Czech Europeans, Czech Vietnamese, and Asian Vietnamese raters of both sexes rated facial portraits of 100 Czech European participants (50 women and 50 men, standardized, non-manipulated) for attractiveness. Taking Czech European ratings as a standard for Czech facial attractiveness, we showed that Czech Vietnamese assessments of attractiveness were closer to this standard than assessments by the Asian Vietnamese. Among all groups of raters, facial averageness positively correlated with perceived attractiveness, which is consistent with the "average is attractive" hypothesis. A marginal impact of sexual shape dimorphism on attractiveness rating was found only in Czech European male raters: neither Czech Vietnamese nor Asian Vietnamese raters of either sex utilized traits associated with sexual shape dimorphism as a cue of attractiveness. We thus conclude that Vietnamese people permanently living in the Czech Republic converge with Czechs of Czech origin in perceptions of facial attractiveness and that this population adopted some but not all Czech standards of beauty.
Affiliation(s)
- Ondřej Pavlovič
- Department of Philosophy and History of Science, Faculty of Science, Charles University, Vinicna 7, Prague, 128 44, Czech Republic
- Vojtěch Fiala
- Department of Philosophy and History of Science, Faculty of Science, Charles University, Vinicna 7, Prague, 128 44, Czech Republic
- Karel Kleisner
- Department of Philosophy and History of Science, Faculty of Science, Charles University, Vinicna 7, Prague, 128 44, Czech Republic
29
Correia-Caeiro C, Holmes K, Miyabe-Nishiwaki T. Extending the MaqFACS to measure facial movement in Japanese macaques (Macaca fuscata) reveals a wide repertoire potential. PLoS One 2021; 16:e0245117. PMID: 33411716; PMCID: PMC7790396; DOI: 10.1371/journal.pone.0245117. Received 06/29/2020; accepted 12/23/2020.
Abstract
Facial expressions are complex and subtle signals, central for communication and emotion in social mammals. Traditionally, facial expressions have been classified as a whole, disregarding small but relevant differences in displays. Even with the same morphological configuration, different information can be conveyed depending on the species. Due to hardwired processing of faces in the human brain, humans are quick to attribute emotion but have difficulty registering facial movement units. The well-known human FACS (Facial Action Coding System) is the gold standard for objectively measuring facial expressions, and can be adapted through anatomical investigation and functional homologies for cross-species systematic comparisons. Here we aimed at developing a FACS for Japanese macaques, following established FACS methodology: first, we considered the species' muscular facial plan; second, we ascertained functional homologies with other primate species; and finally, we categorised each independent facial movement into Action Units (AUs). Due to similarities in the facial musculature of rhesus and Japanese macaques, the MaqFACS (previously developed for rhesus macaques) was used as a basis to extend the FACS tool to Japanese macaques, while highlighting the differences in morphology and appearance changes between the two species. We documented 19 AUs, 15 Action Descriptors (ADs) and 3 Ear Action Units (EAUs) in Japanese macaques, with all movements of MaqFACS found in Japanese macaques. New movements were also observed, indicating a slightly larger repertoire than in rhesus or Barbary macaques. The MaqFACS extension for Japanese macaques reported here, used together with the MaqFACS, comprises a valuable objective tool for the systematic and standardised analysis of facial expressions in Japanese macaques.
The MaqFACS extension for Japanese macaques will now allow the investigation of the evolution of communication and emotion in primates, as well as contribute to improving the welfare of individuals, particularly in captivity and laboratory settings.
Affiliation(s)
- Kathryn Holmes
- School of Psychology, University of Lincoln, Lincoln, Lincolnshire, United Kingdom
30
Karl S, Boch M, Zamansky A, van der Linden D, Wagner IC, Völter CJ, Lamm C, Huber L. Exploring the dog-human relationship by combining fMRI, eye-tracking and behavioural measures. Sci Rep 2020; 10:22273. PMID: 33335230; PMCID: PMC7747637; DOI: 10.1038/s41598-020-79247-5. Received 05/13/2020; accepted 12/04/2020.
Abstract
Behavioural studies revealed that the dog-human relationship resembles the human mother-child bond, but the underlying mechanisms remain unclear. Here, we report the results of a multi-method approach combining fMRI (N = 17), eye-tracking (N = 15), and behavioural preference tests (N = 24) to explore the engagement of an attachment-like system in dogs seeing human faces. We presented morph videos of the caregiver, a familiar person, and a stranger showing either happy or angry facial expressions. Regardless of emotion, viewing the caregiver activated brain regions associated with emotion and attachment processing in humans. In contrast, the stranger elicited activation mainly in brain regions related to visual and motor processing, and the familiar person elicited relatively weak activations overall. While the majority of happy stimuli led to increased activation of the caudate nucleus, associated with reward processing, angry stimuli led to activations in limbic regions. Both the eye-tracking and preference test data supported the superior role of the caregiver's face and were in line with the findings from the fMRI experiment. While preliminary, these findings indicate that cutting across different levels, from brain to behaviour, can provide novel and converging insights into the engagement of the putative attachment system when dogs interact with humans.
Affiliation(s)
- Sabrina Karl
- Clever Dog Lab, Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna, University of Vienna, 1210, Vienna, Austria
- Magdalena Boch
- Social, Cognitive and Affective Neuroscience Unit, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, 1010, Vienna, Austria
- Department of Cognitive Biology, Faculty of Life Sciences, University of Vienna, 1090, Vienna, Austria
- Anna Zamansky
- Information Systems Department, University of Haifa, 3498838, Haifa, Israel
- Dirk van der Linden
- Department of Computer and Information Sciences, Northumbria University, Newcastle-upon-Tyne, NE1 8ST, UK
- Isabella C Wagner
- Social, Cognitive and Affective Neuroscience Unit, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, 1010, Vienna, Austria
- Christoph J Völter
- Clever Dog Lab, Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna, University of Vienna, 1210, Vienna, Austria
- Claus Lamm
- Social, Cognitive and Affective Neuroscience Unit, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, 1010, Vienna, Austria
- Ludwig Huber
- Clever Dog Lab, Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna, University of Vienna, 1210, Vienna, Austria
31
Watson DM, Brown BB, Johnston A. A data-driven characterisation of natural facial expressions when giving good and bad news. PLoS Comput Biol 2020; 16:e1008335. PMID: 33112846; PMCID: PMC7652307; DOI: 10.1371/journal.pcbi.1008335. Received 04/10/2020; revised 11/09/2020; accepted 09/12/2020.
Abstract
Facial expressions carry key information about an individual's emotional state. Research into the perception of facial emotions typically employs static images of a small number of artificially posed expressions taken under tightly controlled experimental conditions. However, such approaches risk missing potentially important facial signals and within-person variability in expressions. The extent to which patterns of emotional variance in such images resemble more natural ambient facial expressions remains unclear. Here we advance a novel protocol for eliciting natural expressions from dynamic faces, using a dimension of emotional valence as a test case. Subjects were video recorded while delivering either positive or negative news to camera, but were not instructed to deliberately or artificially pose any specific expressions or actions. A PCA-based active appearance model was used to capture the key dimensions of facial variance across frames. Linear discriminant analysis distinguished facial change determined by the emotional valence of the message, and this also generalised across subjects. By sampling along the discriminant dimension, and back-projecting into the image space, we extracted a behaviourally interpretable dimension of emotional valence. This dimension highlighted changes commonly represented in traditional face stimuli such as variation in the internal features of the face, but also key postural changes that would typically be controlled away such as a dipping versus raising of the head posture from negative to positive valences. These results highlight the importance of natural patterns of facial behaviour in emotional expressions, and demonstrate the efficacy of using data-driven approaches to study the representation of these cues by the perceptual system. The protocol and model described here could be readily extended to other emotional and non-emotional dimensions of facial variance.
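The pipeline described above (appearance-model features per frame, PCA to capture facial variance, then a linear discriminant yielding a one-dimensional valence axis) can be sketched with numpy on synthetic stand-in data. The frame counts, feature dimensionality, and injected valence effect are invented, and the simple mean-difference discriminant is a minimal proxy for the LDA the authors used:

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_features = 200, 50             # frames x appearance-model features
labels = np.repeat([0, 1], n_frames // 2)  # 0 = negative news, 1 = positive news
X = rng.normal(size=(n_frames, n_features))
X[labels == 1, 0] += 2.0                   # inject a valence-linked difference

# PCA via SVD on mean-centred data
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:10].T                    # keep the first 10 components

# Simplest linear discriminant: direction between class means in PC space
w = scores[labels == 1].mean(axis=0) - scores[labels == 0].mean(axis=0)
w /= np.linalg.norm(w)
projection = scores @ w                    # 1-D "valence" axis per frame
```

Sampling along `w` and back-projecting through the retained components into the original feature space is what turns the discriminant into a behaviourally interpretable image dimension, as the abstract describes.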
Affiliation(s)
- David M. Watson
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
- Ben B. Brown
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
- Alan Johnston
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
32
Ratan Murty NA, Teng S, Beeler D, Mynick A, Oliva A, Kanwisher N. Visual experience is not necessary for the development of face-selectivity in the lateral fusiform gyrus. Proc Natl Acad Sci U S A 2020; 117:23011-23020. PMID: 32839334; PMCID: PMC7502773; DOI: 10.1073/pnas.2004607117.
Abstract
The fusiform face area responds selectively to faces and is causally involved in face perception. How does face-selectivity in the fusiform arise in development, and why does it develop so systematically in the same location across individuals? Preferential cortical responses to faces develop early in infancy, yet evidence is conflicting on the central question of whether visual experience with faces is necessary. Here, we revisit this question by scanning congenitally blind individuals with fMRI while they haptically explored 3D-printed faces and other stimuli. We found robust face-selective responses in the lateral fusiform gyrus of individual blind participants during haptic exploration of stimuli, indicating that neither visual experience with faces nor fovea-biased inputs is necessary for face-selectivity to arise in the lateral fusiform gyrus. Our results instead suggest a role for long-range connectivity in specifying the location of face-selectivity in the human brain.
Affiliation(s)
- N Apurva Ratan Murty
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
- The Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139
- Santani Teng
- The Smith-Kettlewell Eye Research Institute, San Francisco, CA 94115
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139
- David Beeler
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
- Anna Mynick
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
- Aude Oliva
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139
- Nancy Kanwisher
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
- The Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139
33
Zhou Y, Ghassemi P, Chen M, McBride D, Casamento JP, Pfefer TJ, Wang Q. Clinical evaluation of fever-screening thermography: impact of consensus guidelines and facial measurement location. J Biomed Opt 2020; 25:097002. PMID: 32921005; PMCID: PMC7486803; DOI: 10.1117/1.jbo.25.9.097002. Received 06/29/2020; accepted 08/27/2020.
Abstract
SIGNIFICANCE Infrared thermographs (IRTs) have been used for fever screening during infectious disease epidemics, including severe acute respiratory syndrome, Ebola virus disease, and coronavirus disease 2019 (COVID-19). Although IRTs have significant potential for human body temperature measurement, the literature indicates inconsistent diagnostic performance, possibly due to wide variations in implemented methodology. A standardized method for IRT fever screening was recently published, but there is a lack of clinical data demonstrating its impact on IRT performance. AIM Perform a clinical study to assess the diagnostic effectiveness of standardized IRT-based fever screening and evaluate the effect of facial measurement location. APPROACH We performed a clinical study of 596 subjects. Temperatures from 17 facial locations were extracted from thermal images and compared with oral thermometry. Statistical analyses included calculation of receiver operating characteristic (ROC) curves and area under the curve (AUC) values for detection of febrile subjects. RESULTS Pearson correlation coefficients for IRT-based and reference (oral) temperatures were found to vary strongly with measurement location. Approaches based on maximum temperatures in either inner canthi or full-face regions indicated stronger discrimination ability than maximum forehead temperature (AUC values of 0.95 to 0.97 versus 0.86 to 0.87, respectively) and other specific facial locations. These values are markedly better than the vast majority of results found in prior human studies of IRT-based fever screening. CONCLUSION Our findings provide clinical confirmation of the utility of consensus approaches for fever screening, including the use of inner canthi temperatures, while also indicating that full-face maximum temperatures may provide an effective alternate approach.
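The AUC comparison described above can be computed directly as the Mann-Whitney probability that a random febrile subject's reading outranks a random afebrile subject's, with no ML library. The synthetic temperatures below merely illustrate why a low-variability measurement site (inner canthi) yields a higher AUC than a noisier one (forehead); the effect sizes and noise levels are invented, not the study's data:

```python
import numpy as np

def auc(scores, labels):
    """AUC via the Mann-Whitney statistic: P(positive score > negative score),
    counting ties as half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

rng = np.random.default_rng(1)
febrile = rng.integers(0, 2, size=300)          # 1 = oral reference temp at/above cutoff
# Synthetic IRT readings: the canthi signal tracks fever with less noise
canthi = 36.0 + 1.5 * febrile + rng.normal(0, 0.3, 300)
forehead = 35.5 + 1.5 * febrile + rng.normal(0, 0.9, 300)
```

Because AUC is rank-based, the absolute temperature offsets do not matter; only how cleanly the febrile and afebrile distributions separate at each site does.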
Affiliation(s)
- Yangling Zhou
- Food and Drug Administration, Center for Devices and Radiological Health, Silver Spring, Maryland, United States
- University of Maryland, Baltimore County, Department of Mechanical Engineering, Baltimore, Maryland, United States
- Pejman Ghassemi
- Food and Drug Administration, Center for Devices and Radiological Health, Silver Spring, Maryland, United States
- Michelle Chen
- Food and Drug Administration, Center for Devices and Radiological Health, Silver Spring, Maryland, United States
- Johns Hopkins University, Department of Chemical and Biomolecular Engineering, Baltimore, Maryland, United States
- David McBride
- University of Maryland, University Health Center, College Park, Maryland, United States
- Jon P. Casamento
- Food and Drug Administration, Center for Devices and Radiological Health, Silver Spring, Maryland, United States
- T. Joshua Pfefer
- Food and Drug Administration, Center for Devices and Radiological Health, Silver Spring, Maryland, United States
- Quanzeng Wang
- Food and Drug Administration, Center for Devices and Radiological Health, Silver Spring, Maryland, United States
- Address all correspondence to Quanzeng Wang, E-mail:
34
Skov ST, Bünger C, Li H, Vigh-Larsen M, Rölfing JD. Lengthening of magnetically controlled growing rods caused minimal pain in 25 children: pain assessment with FPS-R, NRS, and r-FLACC. Spine Deform 2020; 8:763-770. [PMID: 32170659] [DOI: 10.1007/s43390-020-00096-3]
Abstract
STUDY DESIGN: Descriptive case series.
OBJECTIVE: The aim of the study is to investigate the pain associated with magnetically controlled growing rod (MCGR) lengthening procedures. MCGRs have gained popularity because they offer non-surgical lengthening procedures in early-onset scoliosis (EOS) instead of semi-annual open surgical elongations with traditional growing rods. Many aspects of MCGR treatment have been investigated, but pain in conjunction with distraction is only sparsely described in the literature.
METHODS: Pain intensity was assessed in 25 EOS patients before, during, and after MCGR lengthening procedures in an outpatient setting. The patients had undergone at least two (range 2-16) lengthening procedures prior to this study. Pain intensity was estimated using the patient-reported Faces Pain Scale-Revised (FPS-R), a caregiver-reported numeric rating scale (NRS), and the NRS and revised Face, Legs, Activity, Cry, Consolability scale (r-FLACC) scored by two medically trained observers. Inter-rater reliability and correlations between instruments were analyzed.
RESULTS: 23 of 25 EOS patients (8- to 16-year-olds) with mixed etiology were able to self-report pain. The average pain intensity was mild: median 1 (range 0-6) on all four instruments on a 0-to-10 scale. Afterward, 22/25 patients (88%) were completely pain-free and the remaining 3 patients had a pain score of 1. MCGR stalling (i.e., clunking) was encountered in 14/25 (56%) of the patients without impact on pain intensity.
CONCLUSIONS: The average maximum pain intensities during the lengthening procedures were mild, and pain ceased within a few minutes. Inter-rater reliability was good to excellent for the NRS and r-FLACC, and there were high correlations among all four pain instruments, indicating high criterion validity.
LEVEL OF EVIDENCE: Level IV, case series.
Affiliation(s)
- Simon Toftgaard Skov
- Department of Orthopaedics, Aarhus University Hospital, Palle Juul-Jensens Boulevard 99, 8200 Aarhus, Denmark
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Elective Surgery Centre, Silkeborg Regional Hospital, Silkeborg, Denmark
- Cody Bünger
- Department of Orthopaedics, Aarhus University Hospital, Palle Juul-Jensens Boulevard 99, 8200 Aarhus, Denmark
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Haisheng Li
- Department of Orthopaedics, Aarhus University Hospital, Palle Juul-Jensens Boulevard 99, 8200 Aarhus, Denmark
- Marianne Vigh-Larsen
- Department of Surgery & Anesthesiology, Aarhus University Hospital, Aarhus, Denmark
- Jan Duedal Rölfing
- Department of Orthopaedics, Aarhus University Hospital, Palle Juul-Jensens Boulevard 99, 8200 Aarhus, Denmark
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- MidtSim, Central Denmark Region, Aarhus, Denmark
35
Rosenberg N, Ihme K, Lichev V, Sacher J, Rufer M, Grabe HJ, Kugel H, Pampel A, Lepsien J, Kersting A, Villringer A, Suslow T. Alexithymia and automatic processing of facial emotions: behavioral and neural findings. BMC Neurosci 2020; 21:23. [PMID: 32471365] [PMCID: PMC7257227] [DOI: 10.1186/s12868-020-00572-6]
Abstract
BACKGROUND: Alexithymia is a personality trait characterized by difficulties identifying and describing feelings, an externally oriented style of thinking, and a reduced inclination to imagination. Previous research has shown deficits in the recognition of emotional facial expressions in alexithymia and reduced brain responsivity to emotional stimuli. Using an affective priming paradigm, we investigated automatic perception of facial emotions as a function of alexithymia at the behavioral and neural level. In addition to self-report scales, we applied an interview to assess alexithymic tendencies.
RESULTS: During 3 T fMRI scanning, 49 healthy individuals judged the valence of neutral faces preceded by briefly shown happy, angry, fearful, and neutral facial expressions. Alexithymia was assessed using the 20-item Toronto Alexithymia Scale (TAS-20), the Bermond-Vorst Alexithymia Questionnaire (BVAQ), and the Toronto Structured Interview for Alexithymia (TSIA). As expected, only negative correlations were found between alexithymic features and affective priming. The global level of self-reported alexithymia (as assessed by the TAS-20 and the BVAQ) was related to less affective priming by angry faces. At the facet level, difficulties identifying feelings, difficulties analyzing feelings, and impoverished fantasy (as measured by the BVAQ) were correlated with reduced affective priming by angry faces. Difficulties identifying feelings (BVAQ) also correlated with reduced affective priming by fearful faces, and reduced imagination (TSIA) was related to decreased affective priming by happy faces. There was only one significant correlation between alexithymia dimensions and automatic brain responses to masked facial emotions: TAS-20 alexithymia correlated with heightened brain response to masked happy faces in superior and medial frontal areas.
CONCLUSIONS: Our behavioral results provide evidence that alexithymic features are related in particular to reduced sensitivity to covert facial expressions of anger. These perceptual alterations could reflect impaired automatic recognition or integration of social anger signals into judgemental processes and might contribute to the problems in interpersonal relationships associated with alexithymia. Our findings suggest that self-report measures of alexithymia may have an advantage over interview-based tests as research tools in the field of emotion perception, at least in samples of healthy individuals characterized by rather low levels of alexithymia.
Affiliation(s)
- Nicole Rosenberg
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig, Semmelweisstrasse 10, 04103 Leipzig, Germany
- Klas Ihme
- Institute of Transportation Systems, German Aerospace Center, Lilienthalplatz 7, 38108 Brunswick, Germany
- Vladimir Lichev
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig, Semmelweisstrasse 10, 04103 Leipzig, Germany
- Julia Sacher
- Department of Neurology, Max-Planck-Institute of Human Cognitive and Brain Sciences, Stephanstraße 1, 04103 Leipzig, Germany
- Clinic of Cognitive Neurology, University of Leipzig, Liebigstrasse 18, 04103 Leipzig, Germany
- Michael Rufer
- Department of Psychiatry, Psychotherapy and Psychosomatics, University Hospital Zurich, University of Zurich, Militärstrasse 8, 8021 Zurich, Switzerland
- Hans Jörgen Grabe
- Department of Psychiatry, University Medicine of Greifswald, Ellernholzstraße 1-2, 17475 Greifswald, Germany
- Harald Kugel
- Department of Clinical Radiology, University of Münster, Albert-Schweitzer-Campus 1, 48149 Münster, Germany
- André Pampel
- Nuclear Magnetic Resonance Unit, Max-Planck-Institute of Human Cognitive and Brain Sciences, Stephanstraße 1, 04103 Leipzig, Germany
- Jöran Lepsien
- Nuclear Magnetic Resonance Unit, Max-Planck-Institute of Human Cognitive and Brain Sciences, Stephanstraße 1, 04103 Leipzig, Germany
- Anette Kersting
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig, Semmelweisstrasse 10, 04103 Leipzig, Germany
- Arno Villringer
- Department of Neurology, Max-Planck-Institute of Human Cognitive and Brain Sciences, Stephanstraße 1, 04103 Leipzig, Germany
- Clinic of Cognitive Neurology, University of Leipzig, Liebigstrasse 18, 04103 Leipzig, Germany
- Thomas Suslow
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig, Semmelweisstrasse 10, 04103 Leipzig, Germany
36
Matsuyoshi D, Watanabe K. People have modest, not good, insight into their face recognition ability: a comparison between self-report questionnaires. Psychol Res 2020; 85:1713-1723. [PMID: 32436049] [PMCID: PMC8211616] [DOI: 10.1007/s00426-020-01355-8]
Abstract
Whether people have insight into their face recognition ability has been intensely debated in recent studies using self-report measures. Although some studies indicated good insight, others found the opposite. The discrepancy might be caused by differences in the questionnaires used and/or bias induced by sampling an extreme group such as suspected prosopagnosics. To resolve this issue, we examined the relationship between two representative self-report face recognition questionnaires (Survey, N = 855) and then the extent to which the questionnaires differ in their relationship with face recognition performance (Experiment, N = 180) in normal populations that do not include predetermined extreme groups. We found a very strong correlation (r = 0.82), a dominant principal component (explaining > 90% of the variance), and comparable reliability between the questionnaires. Although these results suggest a strong common factor underlying them, the residual variance is not negligible (33%). Indeed, the follow-up experiment showed that both questionnaires have significant but moderate correlations with actual face recognition performance, and that the correlation was stronger for the Kennerknecht questionnaire (r = −0.38) than for the PI20 (r = −0.23). These findings not only suggest that people have modest insight into their face recognition ability, but also urge researchers and clinicians to carefully assess whether a questionnaire is suitable for estimating an individual's face recognition ability.
Affiliation(s)
- Daisuke Matsuyoshi
- Faculty of Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo 169-8555, Japan
- Araya Inc., ARK Mori Bldg, 1-12-32 Akasaka ARK Hills, Minato, Tokyo 107-6090, Japan
- Quantum Life Science and Functional Brain Imaging Research, National Institute of Radiological Sciences, National Institutes for Quantum and Radiological Science and Technology, 4-9-1 Anagawa, Inage, Chiba 263-8555, Japan
- Katsumi Watanabe
- Faculty of Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo 169-8555, Japan
- Art and Design, University of New South Wales, Sydney, Australia
37
Abstract
Your phone scans your face to unlock its screen. A social media app suggests friends to tag in photos. Airline check-in systems verify who you are as you stare into a camera. These are just a few examples of how facial recognition technology (FRT) has become ubiquitous in everyday life. Law enforcement, Internet search engines, marketing, and security industries have long harnessed FRT, but the technology is increasingly being explored in the health care setting, where its potential benefits, and risks, are much greater.
38
Maurer D, Ghloum JK, Gibson LC, Watson MR, Chen LM, Akins K, Enns JT, Hensch TK, Werker JF. Reduced perceptual narrowing in synesthesia. Proc Natl Acad Sci U S A 2020; 117:10089-10096. [PMID: 32321833] [PMCID: PMC7211996] [DOI: 10.1073/pnas.1914668117]
Abstract
Synesthesia is a neurologic trait in which specific inducers, such as sounds, automatically elicit additional idiosyncratic percepts, such as color (thus "colored hearing"). One explanation for this trait, and the one tested here, is that synesthesia results from unusually weak pruning of cortical synaptic hyperconnectivity during early perceptual development. We tested the prediction from this hypothesis that synesthetes would be superior at making discriminations from nonnative categories that are normally weakened by experience-dependent pruning during a critical period early in development: namely, discrimination among nonnative phonemes (Hindi retroflex /ɖa/ and dental /d̪a/), among chimpanzee faces, and among inverted human faces. Like the superiority of 6-month-old infants over older infants, the synesthetic groups were significantly better than control groups at making all the nonnative discriminations across five samples and three testing sites. The consistent superiority of the synesthetic groups in making discriminations that are normally eliminated during infancy suggests that residual cortical connectivity in synesthesia supports changes in perception that extend beyond the specific synesthetic percepts, consistent with the incomplete pruning hypothesis.
Affiliation(s)
- Daphne Maurer
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, ON, Canada L8S 4K1
- Julian K Ghloum
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, ON, Canada L8S 4K1
- Laura C Gibson
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, ON, Canada L8S 4K1
- Marcus R Watson
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada V6T 1Z4
- Lawrence M Chen
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada V6T 1Z4
- Kathleen Akins
- Department of Philosophy, Simon Fraser University, Burnaby, BC, Canada V5A 1S6
- James T Enns
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada V6T 1Z4
- Takao K Hensch
- Center for Brain Science, Department of Molecular Cellular Biology, Harvard University, Cambridge, MA 02138
- Canadian Institute for Advanced Research, Toronto, ON, Canada M5G 1M1
- International Research Center for Neurointelligence, University of Tokyo Institutes for Advanced Study, Bunkyo-ku, Tokyo 113-0033, Japan
- Janet F Werker
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada V6T 1Z4
- Canadian Institute for Advanced Research, Toronto, ON, Canada M5G 1M1
39
Farnell DJJ, Richmond S, Galloway J, Zhurov AI, Pirttiniemi P, Heikkinen T, Harila V, Matthews H, Claes P. Multilevel principal components analysis of three-dimensional facial growth in adolescents. Comput Methods Programs Biomed 2020; 188:105272. [PMID: 31865094] [DOI: 10.1016/j.cmpb.2019.105272]
Abstract
BACKGROUND AND OBJECTIVES: The study of age-related facial shape changes across different populations and sexes requires new multivariate tools to disentangle the different sources of variation present in 3D facial images. Here we use a multivariate technique called multilevel principal components analysis (mPCA) to study three-dimensional facial growth in adolescents.
METHODS: Facial shapes were captured for Welsh and Finnish subjects (both male and female) at multiple ages from 12 to 17 years old (i.e., repeated-measures data). 1000 "dense" 3D points were defined regularly on each shape by fitting a deformable template with the "meshmonk" software. A three-level model was used, namely: level 1, sex/ethnicity; level 2, all "subject" variations excluding sex, ethnicity, and age; and level 3, age. The technicalities underpinning the mPCA method are presented in the Appendices.
RESULTS: Eigenvalues via mPCA indicated that level 1 (ethnicity/sex) contained 7.9% of the variation, level 2 contained 71.5%, and level 3 (age) contained 20.6%. The eigenvalues via mPCA followed a similar pattern to the results of single-level PCA. The modes of variation were readily interpretable: effects due to ethnicity, sex, and age were reflected in modes at the appropriate levels of the model. Standardised scores at level 1 via mPCA showed much stronger differentiation between sex and ethnicity groups than the results of single-level PCA. Standardised scores from both single-level PCA and mPCA at level 3 indicated that females had different average "trajectories" with respect to these scores than males, which suggests that facial shape matures in different ways for males and females. No strong evidence of differences in growth patterns between Finnish and Welsh subjects was observed.
CONCLUSIONS: The mPCA results agree with existing research on the general process of facial change in adolescents with respect to age. They support previous evidence that males demonstrate larger changes, and for a longer period of time, than females, especially in the lower third of the face. These calculations are therefore an excellent initial demonstration that multivariate multilevel methods such as mPCA can be used to describe such age-related changes for "dense" 3D point data.
Affiliation(s)
- D J J Farnell
- School of Dentistry, Cardiff University, Heath Park, Cardiff CF14 4XY, United Kingdom
- S Richmond
- School of Dentistry, Cardiff University, Heath Park, Cardiff CF14 4XY, United Kingdom
- J Galloway
- School of Dentistry, Cardiff University, Heath Park, Cardiff CF14 4XY, United Kingdom
- A I Zhurov
- School of Dentistry, Cardiff University, Heath Park, Cardiff CF14 4XY, United Kingdom
- P Pirttiniemi
- Research Unit of Oral Health Sciences, Faculty of Medicine, University of Oulu, Oulu, Finland; Medical Research Center Oulu (MRC Oulu), Oulu University Hospital, Oulu, Finland
- T Heikkinen
- Research Unit of Oral Health Sciences, Faculty of Medicine, University of Oulu, Oulu, Finland; Medical Research Center Oulu (MRC Oulu), Oulu University Hospital, Oulu, Finland
- V Harila
- Research Unit of Oral Health Sciences, Faculty of Medicine, University of Oulu, Oulu, Finland; Medical Research Center Oulu (MRC Oulu), Oulu University Hospital, Oulu, Finland
- H Matthews
- Medical Imaging Research Center, UZ Leuven, 3000 Leuven, Belgium; Department of Human Genetics, KU Leuven, 3000 Leuven, Belgium; OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Leuven, Belgium; Facial Sciences Research Group, Murdoch Children's Research Institute, Melbourne, Australia; Department of Paediatrics, University of Melbourne, Melbourne, Australia
- P Claes
- Medical Imaging Research Center, UZ Leuven, 3000 Leuven, Belgium; Department of Human Genetics, KU Leuven, 3000 Leuven, Belgium; Department of Electrical Engineering, ESAT/PSI, KU Leuven, 3000 Leuven, Belgium
40
Gonzalez-Franco M, Steed A, Hoogendyk S, Ofek E. Using Facial Animation to Increase the Enfacement Illusion and Avatar Self-Identification. IEEE Trans Vis Comput Graph 2020; 26:2023-2029. [PMID: 32070973] [DOI: 10.1109/tvcg.2020.2973075]
Abstract
Through avatar embodiment in Virtual Reality (VR) we can achieve the illusion that an avatar is substituting for our body: the avatar moves as we move and we see it from a first-person perspective. However, self-identification, the process of identifying a representation as being oneself, poses new challenges because a key determinant is that we see and have agency over our own face. Providing control over the face is hard with current HMD technologies because face tracking is either cumbersome or error prone. However, limited animation is easily achieved based on speech. We investigate the level of avatar enfacement, that is, believing that a picture of a face is one's own, under three levels of facial animation: (i) one in which the facial expressions of the avatars are static, (ii) one in which we implement lip-sync motion, and (iii) one in which the avatar presents lip-sync plus additional facial animations, with blinks, designed by a professional animator. We measure self-identification using a face morphing tool that morphs from the face of the participant to the face of a gender-matched avatar. We find that self-identification with avatars can be increased through pre-baked animations even when these are not photorealistic and do not resemble the participant.
41
Abstract
This study investigated the thermal properties of five virtual reality headsets, the subjective thermal discomfort associated with their use, and the relationships between the two. Twenty-seven university students used each of the five headsets for 45 min. Microclimate temperature and relative humidity were measured by miniature dataloggers. Infrared thermography was used to measure the temperature distribution at the contact points between the user's face and the headsets. Participants reported the subjective thermal discomfort associated with using each headset. The average microclimate temperature and relative humidity increased by 7.8 °C and 3.5%, respectively, after headset use. Overall subjective thermal discomfort increased with duration of use and came primarily from the display. A linear mixed-effects model showed that subjective thermal discomfort is positively correlated with duration of use, microclimate temperature, relative humidity, and display coverage area. Conversely, thermal discomfort is negatively correlated with total coverage area, with microclimate temperature acting as the most significant contributing factor. The headsets were ranked by pairing the objective measurements with the subjective evaluations.
Affiliation(s)
- Zihao Wang
- School of Design, Hunan University, China
- Renke He
- School of Design, Hunan University, China
- Ke Chen
- School of Design, Hunan University, China
42
Stower RE, Lee AJ, McIntosh TL, Sidari MJ, Sherlock JM, Dixson BJW. Mating Strategies and the Masculinity Paradox: How Relationship Context, Relationship Status, and Sociosexuality Shape Women's Preferences for Facial Masculinity and Beardedness. Arch Sex Behav 2020; 49:809-820. [PMID: 31016490] [DOI: 10.1007/s10508-019-1437-2]
Abstract
According to the dual mating strategy model, in short-term mating contexts women should forego paternal investment qualities in favor of mates with well-developed secondary sexual characteristics and dominant behavioral displays. We tested whether this model explains variation in women's preferences for facial masculinity and beardedness in male faces. Computer-generated composites that had been morphed to appear ± 50% masculine were rated by 671 heterosexual women (M age = 31.72 years, SD = 6.43) for attractiveness when considering them as a short-term partner, long-term partner, a co-parent, or a friend. They then completed the Revised Sociosexual Inventory (SOI-R) to determine their sexual openness on dimensions of desire, behavior, and attitudes. Results showed that women's preferences were strongest for average facial masculinity, followed by masculinized faces, with feminized faces being least attractive. In contrast to past research, facial masculinity preferences were stronger when judging for co-parenting partners than for short-term mates. Facial masculinity preferences were also positively associated with behavioral SOI, negatively with desire, and were unrelated to global or attitudinal SOI. Women gave higher ratings for full beards than clean-shaven faces. Preferences for beards were higher for co-parenting and long-term relationships than short-term relationships, although these differences were not statistically significant. Preferences for facial hair were positively associated with global and attitudinal SOI, but were unrelated to behavioral SOI and desire. Although further replication is necessary, our findings indicate that sexual openness is associated with women's preferences for men's facial hair and suggest variation in the association between sociosexuality and women's facial masculinity preferences.
Affiliation(s)
- Rebecca E Stower
- School of Psychology, University of Queensland, St Lucia, QLD 4072, Australia
- Anthony J Lee
- Division of Psychology, University of Stirling, Stirling, Scotland, UK
- Toneya L McIntosh
- School of Psychology, University of Queensland, St Lucia, QLD 4072, Australia
- Morgan J Sidari
- School of Psychology, University of Queensland, St Lucia, QLD 4072, Australia
- James M Sherlock
- School of Psychology, University of Queensland, St Lucia, QLD 4072, Australia
- Barnaby J W Dixson
- School of Psychology, University of Queensland, St Lucia, QLD 4072, Australia
43
Jeong D, Kim BG, Dong SY. Deep Joint Spatiotemporal Network (DJSTN) for Efficient Facial Expression Recognition. Sensors (Basel) 2020; 20:1936. [PMID: 32235662] [PMCID: PMC7180996] [DOI: 10.3390/s20071936]
Abstract
Understanding a person's feelings is a very important process for affective computing. People express their emotions in various ways; among them, facial expression is the most effective way to convey human emotional status. We propose efficient deep joint spatiotemporal features for facial expression recognition based on deep appearance and geometric neural networks. We apply three-dimensional (3D) convolution to extract spatial and temporal features simultaneously. For the geometric network, 23 dominant facial landmarks are selected to express the movement of facial muscles through an analysis of the energy distribution of all facial landmarks. We combine these features with the designed joint fusion classifier so that they complement each other. Experimental results verify recognition accuracies of 99.21%, 87.88%, and 91.83% on the CK+, MMI, and FERA datasets, respectively. Through comparative analysis, we show that the proposed scheme improves recognition accuracy by at least 4%.
44
Markett S, Jawinski P, Kirsch P, Gerchen MF. Specific and segregated changes to the functional connectome evoked by the processing of emotional faces: A task-based connectome study. Sci Rep 2020; 10:4822. [PMID: 32179856] [PMCID: PMC7076018] [DOI: 10.1038/s41598-020-61522-0]
Abstract
The functional connectome is organized into several separable intrinsic connectivity networks (ICNs) that are thought to be the building blocks of the mind. However, it is currently not well understood how these networks are engaged by emotionally salient information, and how such engagement fits into emotion theories. The current study assessed how ICNs respond during the processing of angry and fearful faces in a large sample (N = 843) and examined how connectivity changes relate to the ICNs. All ICNs were modulated by emotional faces and showed functional interactions, a finding in line with the "theory of constructed emotions", which assumes that basic emotions do not arise from separable ICNs but from their interplay. We further identified a set of brain regions whose connectivity changes during the task suggest a special role as "affective hubs" in the brain. While hubs were located in all ICNs, we observed high selectivity for the amygdala within the subcortical network, a finding that also fits "primary emotion" theory. The topology of the hubs corresponded closely to a set of brain regions that has been implicated in anxiety disorders, pointing towards the clinical relevance of the present findings. The present data are the most comprehensive mapping of connectome-wide changes in functional connectivity evoked by an affective processing task thus far and support two competing views on how emotions are represented in the brain, suggesting that the connectome paradigm might help unify the two ideas.
Affiliation(s)
- Peter Kirsch
- Central Institute of Mental Health, University of Heidelberg/Medical Faculty Mannheim, Mannheim, Germany
- Bernstein Center for Computational Neuroscience Heidelberg/Mannheim, Mannheim, Germany
- Martin F Gerchen
- Central Institute of Mental Health, University of Heidelberg/Medical Faculty Mannheim, Mannheim, Germany
- Bernstein Center for Computational Neuroscience Heidelberg/Mannheim, Mannheim, Germany
45
Khurshid A, Scharcanski J. An Adaptive Face Tracker with Application in Yawning Detection. Sensors (Basel) 2020; 20:1494. [PMID: 32182814] [PMCID: PMC7085723] [DOI: 10.3390/s20051494]
Abstract
In this work, we propose an adaptive face tracking scheme that compensates for possible face tracking errors during its operation. The proposed scheme is equipped with a tracking divergence estimate, which allows face tracking errors to be detected early and minimized, so the tracked face is not lost indefinitely. When the estimated face tracking error increases, a resyncing mechanism based on Constrained Local Models (CLM) is activated to reduce the tracking errors by re-estimating the locations of the tracked facial features (e.g., facial landmarks). To improve the CLM feature search mechanism, a Weighted-CLM (W-CLM) is proposed and used in resyncing. The performance of the proposed face tracking method is evaluated in the challenging context of driver monitoring using yawning detection and talking video datasets. Furthermore, an improvement to a yawning detection scheme is proposed. Experiments suggest that our face tracking scheme obtains better performance than comparable state-of-the-art face tracking methods and can be successfully applied to yawning detection.
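The divergence-triggered resyncing loop described above can be sketched as follows (the class name, the exponentially weighted error estimate, and the threshold are assumptions for illustration; the paper's actual divergence estimate and W-CLM fitter are more involved):

```python
# Hedged sketch: a tracker accumulates a divergence estimate each frame
# and, once it exceeds a threshold, calls a CLM-style re-detection to
# re-seed the tracked landmarks.

class AdaptiveFaceTracker:
    def __init__(self, resync_fn, divergence_threshold=1.0):
        self.resync_fn = resync_fn        # e.g., a CLM-style landmark fitter
        self.threshold = divergence_threshold
        self.divergence = 0.0
        self.resync_count = 0

    def step(self, frame_error):
        """frame_error: per-frame tracking residual (appearance mismatch)."""
        # Exponentially weighted estimate: recent errors dominate.
        self.divergence = 0.8 * self.divergence + 0.2 * frame_error
        if self.divergence > self.threshold:
            self.resync_fn()              # re-estimate facial landmarks
            self.divergence = 0.0         # tracking is trusted again
            self.resync_count += 1
        return self.divergence

tracker = AdaptiveFaceTracker(resync_fn=lambda: None, divergence_threshold=0.5)
for err in [0.1, 0.2, 0.9, 1.2, 1.5, 0.1]:  # error grows, then resync resets it
    tracker.step(err)
print(tracker.resync_count)  # 1
```

The design point is that re-detection is expensive, so it runs only when the cheap per-frame divergence estimate signals that tracking is drifting.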
Affiliation(s)
- Aasim Khurshid
- Sidia Instituto de Ciencia e tecnologia, Amazonas, Manaus 69055-035, Brazil
- Instituto de Informatica, UFRGS, Porto Alegre 9500, Brazil;
- Correspondence:
46
|
Guzzi F, De Bortoli L, Molina RS, Marsi S, Carrato S, Ramponi G. Distillation of an End-to-End Oracle for Face Verification and Recognition Sensors. Sensors (Basel) 2020; 20:1369. [PMID: 32131494 PMCID: PMC7085744 DOI: 10.3390/s20051369] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Received: 02/04/2020] [Revised: 02/25/2020] [Accepted: 02/28/2020] [Indexed: 11/16/2022]
Abstract
Face recognition functions are today exploited through biometric sensors in many applications, from extended security systems to inclusion devices; in this field, deep neural network methods are achieving stunning performance. The main limitation of the deep learning approach is the inconvenient relation between the accuracy of the results and the computing power needed. When a personal device is employed, in particular, many algorithms require a cloud computing approach to achieve the expected performance; other algorithms adopt models that are simple by design. A third viable option is model (oracle) distillation. This is the most intriguing of the compression techniques, since it permits devising the minimal structure that will enforce the same I/O relation as the original model. In this paper, a distillation technique is applied to a complex model, enabling the introduction of fast state-of-the-art recognition capabilities on a low-end hardware face recognition sensor module. Two distilled models are presented in this contribution: the former can be used directly in place of the original oracle, while the latter better embodies the end-to-end approach, removing the need for a separate alignment procedure. The presented biometric systems are examined on the two problems of face verification and face recognition in an open set, using well-established training/testing methodologies and datasets.
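The oracle-distillation idea can be illustrated with a generic Hinton-style soft-target loss (a hedged sketch; the paper's exact training recipe, architectures, and loss are not reproduced here). A small "student" is trained to reproduce the softened outputs of the large "teacher" oracle, so the student enforces the same input/output relation at a fraction of the compute cost:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax; higher T spreads probability mass."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Cross-entropy between softened teacher and student distributions."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    return -np.sum(p_teacher * log_p_student, axis=-1).mean()

teacher = np.array([[10.0, 1.0, 0.5]])
matched = distillation_loss(np.array([[10.0, 1.0, 0.5]]), teacher)
mismatched = distillation_loss(np.array([[0.5, 1.0, 10.0]]), teacher)
print(matched < mismatched)  # True
```

Minimizing this loss over a training set drives the student toward the oracle's full output distribution, which carries more information than hard labels alone.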
Affiliation(s)
- Francesco Guzzi
- Engineering and Architecture department, Image Processing Laboratory (IPL), University of Trieste, 34127 Trieste, Italy
- Elettra Sincrotrone Trieste, Scientific Computing, 34149 Basovizza, Italy
- Correspondence:
- Luca De Bortoli
- Engineering and Architecture department, Image Processing Laboratory (IPL), University of Trieste, 34127 Trieste, Italy
- Romina Soledad Molina
- Engineering and Architecture department, Image Processing Laboratory (IPL), University of Trieste, 34127 Trieste, Italy
- The Abdus Salam International Centre for Theoretical Physics (ICTP), Multidisciplinary Laboratory, 34151 Trieste, Italy
- Stefano Marsi
- Engineering and Architecture department, Image Processing Laboratory (IPL), University of Trieste, 34127 Trieste, Italy
- Sergio Carrato
- Engineering and Architecture department, Image Processing Laboratory (IPL), University of Trieste, 34127 Trieste, Italy
- Giovanni Ramponi
- Engineering and Architecture department, Image Processing Laboratory (IPL), University of Trieste, 34127 Trieste, Italy
47
|
Wu C, Zhen Z, Huang L, Huang T, Liu J. COMT-Polymorphisms Modulated Functional Profile of the Fusiform Face Area Contributes to Face-Specific Recognition Ability. Sci Rep 2020; 10:2134. [PMID: 32034175 PMCID: PMC7005682 DOI: 10.1038/s41598-020-58747-4] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Received: 05/30/2019] [Accepted: 01/15/2020] [Indexed: 12/03/2022] Open
Abstract
Previous studies have shown that face-specific recognition ability (FRA) is heritable; however, the neural basis of this heritability is unclear. Candidate gene studies have suggested that the catechol-O-methyltransferase (COMT) rs4680 polymorphism is related to face perception. Here, using a partial least squares (PLS) method, we examined the multivariate association between 12 genotypes of 4 COMT polymorphisms (rs6269-rs4633-rs4818-rs4680) and multimodal MRI phenotypes in the human fusiform face area (FFA), which selectively responds to face stimuli, in 338 Han Chinese adults (mean age 20.45 years; 135 males). The MRI phenotypes included gray matter volume (GMV), resting-state fractional amplitude of low-frequency fluctuations (fALFF), and face-selective blood-oxygen-level-dependent (BOLD) responses (FS). We found that the first COMT-variant component (PLS1) was positively associated with the FS but negatively associated with the fALFF in the FFA. Moreover, participants with the heterozygous COMT HEA haplotype showed higher PLS1 FFA-MRI scores, which were positively associated with the FRA in an old/new face recognition task, than those with the homozygous COMT HEA haplotype and HEA non-carriers, suggesting that individuals with an appropriate (intermediate) level of dopamine activity in the FFA might have better FRA. In summary, our study provides empirical evidence for the genetic and neural basis of the heritability of face recognition and informs our understanding of how the functional specificity of neural modules is formed.
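The PLS association between genotype codes and MRI phenotypes can be sketched with a first-component PLS computed from the SVD of the cross-covariance matrix (toy data and variable names are illustrative; the study's actual pipeline includes permutation testing and covariate control):

```python
import numpy as np

# Toy data: a shared latent factor ("PLS1") drives both genotype codes (X)
# and imaging phenotypes (Y), plus noise.
rng = np.random.default_rng(0)
n = 200
latent = rng.normal(size=n)
X = np.outer(latent, [1.0, -0.5, 0.3]) + 0.1 * rng.normal(size=(n, 3))
Y = np.outer(latent, [0.8, -0.2]) + 0.1 * rng.normal(size=(n, 2))

def pls_first_component(X, Y):
    """First PLS weight pair: maximizes covariance between X- and Y-scores."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    C = Xc.T @ Yc                        # cross-covariance matrix
    U, s, Vt = np.linalg.svd(C)
    wx, wy = U[:, 0], Vt[0]              # leading singular vectors
    return Xc @ wx, Yc @ wy              # latent scores

tx, ty = pls_first_component(X, Y)
r = np.corrcoef(tx, ty)[0, 1]
print(abs(r) > 0.9)  # True
```

The leading singular vectors of the cross-covariance give the weight pair whose scores covary maximally, which is exactly the first PLS component.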
Affiliation(s)
- Chao Wu
- School of Nursing, Peking University Health Science Centre, Beijing, 100191, China
- Zonglei Zhen
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Lijie Huang
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
- Taicheng Huang
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
- Jia Liu
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
48
|
Abstract
Millennials, defined as the generation of individuals born between 1981 and 1996, have emerged as one of the leading patient demographics seeking minimally invasive cosmetic procedures. Worldwide, millennials are more likely to consider preventative treatments than any other age group. The three most popular minimally invasive facial procedures in this demographic are botulinum toxin, dermal fillers (e.g., hyaluronic acid, calcium hydroxylapatite, facial fat fillers), and microdermabrasion. Given their impact on the expanding aesthetic medicine market and their favorable disposition towards cosmetic procedures, it is necessary for dermatologists and cosmetic providers to understand their motivations and perspectives. While some research studies have elicited the opinions of millennials on social issues, education, and technology, there is a paucity of literature on millennials' impressions, opinions, and perceptions of aesthetic procedures. As a generation that has been reshaping the culture of healthcare delivery and encouraging the innovation of products and procedures with their unique values and perspectives, accounting for their beliefs and fostering a better understanding of their experiences will help elevate the quality of their care.
49
|
Abstract
AbobotulinumtoxinA (Dysport) has a long history as a safe and effective treatment option for aesthetic rejuvenation. One of the key measures of botulinum toxin efficacy is the persistence of clinically meaningful results. The duration of efficacy depends on several factors, many of which can be controlled by the clinician to better achieve the desired results. In this review, we discuss how dose, individual patient variation, and injection technique affect the duration of botulinum toxins. Increased duration may result from an increased dose or more precise placement of the toxin in the muscle. The varying anatomy and behavior of patients can affect duration as well. Measures of duration in clinical studies vary, but both a 1-grade improvement on the glabellar line severity scale and patient-reported outcomes are key measures. The clinical effects of Dysport can last up to 5 months, and patients in Dysport clinical studies remained satisfied with treatment for up to 6 months. Dysport has a legacy of safety, efficacy, and high subject satisfaction demonstrated through studies and clinical experience. Building on that legacy by dosing the subject correctly, properly accounting for individual subject anatomy and behavior, and using specific injection techniques can help ensure that your patients have the longest-lasting results.
Affiliation(s)
- Hermine Warren
- Hermine Warren, DNP, APRN, CANS, CNM, is an advanced practice RN, GenNow faculty and a GAIN trainer for Galderma. She is also PALETTE faculty. She is at Facialogy Medical, Inc., Encino, CA
- Kim Welch
- Kim Welch, BSN, RN, CANS, is an aesthetics specialist GenNow faculty and a GAIN trainer for Galderma. She is at Esperance Aesthetic Wellness, Coppell, TX
- Sarah Coquis-Knezek
- Sarah Coquis-Knezek, PhD, is an associate medical affairs advisor at Galderma Laboratories, L.P., Fort Worth, TX
50
|
Lin XX, Sun YB, Wang YZ, Fan L, Wang X, Wang N, Luo F, Wang JY. Ambiguity Processing Bias Induced by Depressed Mood Is Associated with Diminished Pleasantness. Sci Rep 2019; 9:18726. [PMID: 31822749 PMCID: PMC6904491 DOI: 10.1038/s41598-019-55277-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Received: 10/22/2018] [Accepted: 11/21/2019] [Indexed: 11/30/2022] Open
Abstract
Depressed individuals are biased to perceive, interpret, and judge ambiguous cues in a negative/pessimistic manner. Depressed mood can induce and exacerbate these biases, but the underlying mechanisms are not fully understood. We theorize that depressed mood can bias ambiguity processing by altering one's subjective emotional feelings (e.g., pleasantness/unpleasantness) of the cues. This is because, when objective information is limited, individuals often rely on subjective feelings as a source of information for cognitive processing. To test this theory, three groups (induced depression vs. spontaneous depression vs. neutral) were tested in the Judgement Bias Task (JBT), a behavioral assay of ambiguity processing bias. Subjective pleasantness/unpleasantness of cues was measured by facial electromyography (EMG) from the zygomaticus major (ZM, "smiling") and the corrugator supercilii (CS, "frowning") muscles. As predicted, induced sad mood (vs. neutral mood) yielded a negative bias with a magnitude comparable to that of a spontaneous depressed mood. The facial EMG data indicate that the negative judgement bias induced by depressed mood was associated with a decrease in ZM reactivity (i.e., diminished perceived pleasantness of cues). Our results suggest that depressed mood may bias ambiguity processing by affecting the reward system.
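A judgement bias index of the kind the JBT yields might be computed as the proportion of "positive" responses to the ambiguous cues (a hedged sketch; the cue labels and scoring are illustrative, not the authors' exact measure):

```python
# Cues range from clearly negative to clearly positive; the bias index is
# the proportion of "positive" judgements of the ambiguous (middle) cues,
# so values below 0.5 indicate a negative/pessimistic bias.

def bias_index(responses_by_cue, ambiguous_cues):
    """responses_by_cue: {cue: list of 0/1 (1 = judged positive)}."""
    hits = sum(sum(responses_by_cue[c]) for c in ambiguous_cues)
    total = sum(len(responses_by_cue[c]) for c in ambiguous_cues)
    return hits / total

responses = {
    "negative": [0, 0, 0, 0],
    "near_negative": [0, 1, 0, 1],
    "middle": [0, 1, 1, 0],
    "near_positive": [1, 1, 0, 0],
    "positive": [1, 1, 1, 1],
}
print(bias_index(responses, ["near_negative", "middle", "near_positive"]))  # 0.5
```

A group under induced sad mood would be expected to score below a neutral group on this index, paralleling the negative bias reported above.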
Affiliation(s)
- Xiao-Xiao Lin
- CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Ya-Bin Sun
- CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yu-Zheng Wang
- CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Lu Fan
- CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Xin Wang
- CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Sino-Danish Center for Education and Research, Beijing, China
- Ning Wang
- CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Fei Luo
- CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Jin-Yan Wang
- CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China