1
Namba S, Sato W, Namba S, Nomiya H, Shimokawa K, Osumi M. Development of the RIKEN database for dynamic facial expressions with multiple angles. Sci Rep 2023; 13:21785. PMID: 38066065; PMCID: PMC10709572; DOI: 10.1038/s41598-023-49209-8.
Abstract
Research on facial expressions with sensing information is progressing in multidisciplinary fields such as psychology, affective computing, and cognitive science. Previous facial datasets have not simultaneously dealt with multiple theoretical views of emotion, individualized context, or multi-angle/depth information. We developed a new facial database (the RIKEN facial expression database) that includes multiple theoretical views of emotions and expressers' individualized events, with multi-angle and depth information. The RIKEN facial expression database contains recordings of 48 Japanese participants captured using ten Kinect cameras across 25 events. This study identified several valence-related facial patterns and found them consistent with previous research investigating the coherence between facial movements and internal states. This database represents an advancement in developing a new sensing system, conducting psychological experiments, and understanding the complexity of emotional events.
Affiliation(s)
- Shushi Namba
- RIKEN, Psychological Process Research Team, Guardian Robot Project, Kyoto, 6190288, Japan.
- Department of Psychology, Hiroshima University, Hiroshima, 7398524, Japan.
- Wataru Sato
- RIKEN, Psychological Process Research Team, Guardian Robot Project, Kyoto, 6190288, Japan.
- Saori Namba
- Department of Psychology, Hiroshima University, Hiroshima, 7398524, Japan
- Hiroki Nomiya
- Faculty of Information and Human Sciences, Kyoto Institute of Technology, Kyoto, 6068585, Japan
- Koh Shimokawa
- KOHINATA Limited Liability Company, Osaka, 5560020, Japan
- Masaki Osumi
- KOHINATA Limited Liability Company, Osaka, 5560020, Japan
2
Kamal M, Möbius M, Bartella AK, Lethaus B. Perception of aesthetic features after surgical treatment of craniofacial malformations by observers of the same age: An eye-tracking study. J Craniomaxillofac Surg 2023; 51:708-715. PMID: 37813772; DOI: 10.1016/j.jcms.2023.09.009.
Abstract
The aim of this study was to evaluate where exactly children and adolescents of the same age group look when they interact with each other, and to record and analyse these data using eye-tracking technology. MATERIALS AND METHODS 60 subjects participated in the study, evenly divided into three age categories of 20 each: pre-school/primary school age (5-9 years), early adolescence (10-14 years), and late adolescence/transition to adulthood (15-19 years). Age groups were matched and categorized to be used both for creating the picture series and for testing. Photographs of patients with both unilateral and bilateral cleft lip and palate were used to create the series of images, which consisted of a total of 15 photos: 5 photos of patients with surgically treated cleft deformity and 10 control photos of healthy faces, presented in random order. Using the eye-tracking module, data on "area of first view" (area of initial attention), "area with longest view" (area of sustained attention), "time until view in this area" (time of initial attention), and "frequency of view in each area" (time of sustained attention) were calculated. RESULTS Across all groups, there was no significant difference between the individual regions for the parameters of initial attention (area of first view), while the time until first fixation of one of the AOIs (time until view in this area) was significant for all facial regions. A predictable path of the facial scan is abandoned when secondary facial deformities are present, and attention is focused more on the region of an existing deformity, namely the nose and mouth regions. CONCLUSIONS There are significant differences in both male and female participants' viewing of faces with and without secondary cleft deformity. While in the youngest age group it was the mouth region that received special attention from male viewers, in the middle age group this shifted to the nose region, which male participants fixated significantly more often and faster. Female participants looked at the mouth and nose regions each for twice as long compared with healthy faces, putting both the mouth and the nose regions in the focus of observation.
Affiliation(s)
- Mohammad Kamal
- Department of Surgical Sciences, College of Dentistry, Health Sciences Center, Kuwait University, Safat, Kuwait.
- Marianne Möbius
- Department of Oral and Maxillofacial Surgery, Leipzig University Hospital, Leipzig, Germany.
- Alexander K Bartella
- Department of Oral and Maxillofacial Surgery, Leipzig University Hospital, Leipzig, Germany.
- Bernd Lethaus
- Department of Oral and Maxillofacial Surgery, Leipzig University Hospital, Leipzig, Germany.
3
Todd E, Subendran S, Wright G, Guo K. Emotion category-modulated interpretation bias in perceiving ambiguous facial expressions. Perception 2023; 52:695-711. PMID: 37427421; PMCID: PMC10510303; DOI: 10.1177/03010066231186936.
Abstract
In contrast to prototypical facial expressions, we show less perceptual tolerance in perceiving vague expressions, demonstrating an interpretation bias such as more frequent perception of anger or happiness when categorizing ambiguous expressions of angry and happy faces that are morphed in different proportions and displayed under high- or low-quality conditions. However, it remains unclear whether this interpretation bias is specific to emotion categories or reflects a general negativity versus positivity bias, and whether the degree of this bias is affected by the valence or category of the two morphed expressions. These questions were examined in two eye-tracking experiments by systematically manipulating expression ambiguity and image quality in fear- and sad-happiness faces (Experiment 1) and by directly comparing anger-, fear-, sadness-, and disgust-happiness expressions (Experiment 2). We found that increasing expression ambiguity and degrading image quality induced a general negativity versus positivity bias in expression categorization. The degree of negativity bias, the associated reaction times, and face-viewing gaze allocation were further modulated by the different expression combinations. Although we show a viewing condition-dependent bias in interpreting vague facial expressions that display valence-contradicting expressive cues, the perception of these ambiguous expressions appears to be guided by a categorical process similar to that involved in perceiving prototypical expressions.
4
Vicente-Querol MA, Fernández-Caballero A, González P, González-Gualda LM, Fernández-Sotos P, Molina JP, García AS. Effect of Action Units, Viewpoint and Immersion on Emotion Recognition Using Dynamic Virtual Faces. Int J Neural Syst 2023; 33:2350053. PMID: 37746831; DOI: 10.1142/s0129065723500533.
Abstract
Facial affect recognition is a critical skill in human interactions that is often impaired in psychiatric disorders. To address this challenge, tests have been developed to measure and train this skill. Recently, virtual human (VH) and virtual reality (VR) technologies have emerged as novel tools for this purpose. This study investigates the unique contributions of different factors in the communication and perception of emotions conveyed by VHs. Specifically, it examines the effects of the use of action units (AUs) in virtual faces, the positioning of the VH (frontal or mid-profile), and the level of immersion in the VR environment (desktop screen versus immersive VR). Thirty-six healthy subjects participated in each condition. Dynamic virtual faces (DVFs), VHs with facial animations, were used to represent the six basic emotions and the neutral expression. The results highlight the important role of the accurate implementation of AUs in virtual faces for emotion recognition. Furthermore, it is observed that frontal views outperform mid-profile views in both test conditions, while immersive VR shows a slight improvement in emotion recognition. This study provides novel insights into the influence of these factors on emotion perception and advances the understanding and application of these technologies for effective facial emotion recognition training.
Affiliation(s)
- Miguel A Vicente-Querol
- Instituto de Investigación en Informática, Universidad de Castilla-La Mancha, Albacete 02071, Spain
- Antonio Fernández-Caballero
- Instituto de Investigación en Informática, Universidad de Castilla-La Mancha, Albacete 02071, Spain
- Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, Albacete 02071, Spain
- Biomedical Research Networking Centre in Mental Health, Instituto de Salud Carlos III, Madrid 28029, Spain
- Pascual González
- Instituto de Investigación en Informática, Universidad de Castilla-La Mancha, Albacete 02071, Spain
- Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, Albacete 02071, Spain
- Biomedical Research Networking Centre in Mental Health, Instituto de Salud Carlos III, Madrid 28029, Spain
- Luz M González-Gualda
- Servicio de Salud Mental, Complejo Hospitalario Universitario de Albacete, Albacete 02004, Spain
- Patricia Fernández-Sotos
- Biomedical Research Networking Centre in Mental Health, Instituto de Salud Carlos III, Madrid 28029, Spain
- Servicio de Salud Mental, Complejo Hospitalario Universitario de Albacete, Albacete 02004, Spain
- José P Molina
- Instituto de Investigación en Informática, Universidad de Castilla-La Mancha, Albacete 02071, Spain
- Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, Albacete 02071, Spain
- Arturo S García
- Instituto de Investigación en Informática, Universidad de Castilla-La Mancha, Albacete 02071, Spain
- Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, Albacete 02071, Spain
5
Li S, Hao B, Dang W, He W, Luo W. Prioritized Identification of Fearful Eyes during the Attentional Blink Is Not Automatic. Brain Sci 2023; 13:1392. PMID: 37891761; PMCID: PMC10605468; DOI: 10.3390/brainsci13101392.
Abstract
The eye region conveys considerable information regarding an individual's emotions, motivations, and intentions during interpersonal communication. Evidence suggests that the eye regions of an individual expressing emotions can capture attention more rapidly than the eye regions of an individual in a neutral affective state. However, how attentional resources affect the processing of emotions conveyed by the eye regions remains unclear. Accordingly, the present study employed a dual-target rapid serial visual presentation task: happy, neutral, or fearful eye regions were presented as the second target, with a temporal lag between two targets of 232 or 696 ms. Participants completed two tasks successively: Task 1 was to identify which species the upright eye region they had seen belonged to, and Task 2 was to identify what emotion was conveyed in the upright eye region. The behavioral results showed that the accuracy for fearful eye regions was lower than that for neutral eye regions under the condition of limited attentional resources; however, accuracy differences across the three types of eye regions did not reach significance under the condition of adequate attentional resources. These findings indicate that preferential processing of fearful expressions is not automatic but is modulated by available attentional resources.
Affiliation(s)
- Shuaixia Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; (S.L.); (W.H.)
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Bin Hao
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; (S.L.); (W.H.)
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Wei Dang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; (S.L.); (W.H.)
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Weiqi He
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; (S.L.); (W.H.)
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Wenbo Luo
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; (S.L.); (W.H.)
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
6
Sueur C, Piermattéo A, Pelé M. Eye image effect in the context of pedestrian safety: a French questionnaire study. F1000Res 2023; 11:218. PMID: 37822956; PMCID: PMC10562793; DOI: 10.12688/f1000research.76062.2.
Abstract
Human behavior is influenced by the presence of others, which scientists call 'the audience effect'. The use of social control to produce more cooperative behaviors may positively influence road use and safety. This study used an online questionnaire to test how eye images affect the behavior of pedestrians when crossing a road. Different eye images of men, women, and a child with different facial expressions (neutral, friendly, and angry) were presented to participants, who were asked what they would feel on looking at these images before crossing a signalized road. Participants also completed a 20-question Pedestrian Behavior Questionnaire (PBQ). The questionnaire was received by 1,447 French participants, 610 of whom answered it in full. Seventy-one percent of participants were women, and the mean age was 35 ± 14 years. Eye images gave individuals the feeling of being observed (33%), of being scared (5%), and of being surprised (26%), and thus yielded mixed results regarding avoiding crossing at the red light. The expression shown in the eyes is also an important factor: feelings of being observed increased by about 10-15%, whilst feelings of being scared or inhibited increased by about 5%, as the expression changed from neutral to friendly to angry. No link was found between the results of our questionnaire and those of the PBQ. This study shows that the use of eye images could reduce illegal crossings by pedestrians, and is thus of key interest as a practical road safety tool. However, the effect is limited, and how to increase this nudge effect needs further consideration.
Affiliation(s)
- Cédric Sueur
- Institut Universitaire de France, Paris, France
- IPHC, UMR7178, Université de Strasbourg, CNRS, Strasbourg, France
- Marie Pelé
- ETHICS EA7446, Lille Catholic University, Lille, France
7
Brunet NM. Affective evaluation of consciously perceived emotional faces reveals a "correct attribution effect". Front Psychol 2023; 14:1146107. PMID: 37303898; PMCID: PMC10249471; DOI: 10.3389/fpsyg.2023.1146107.
Abstract
The strength of the affective priming effect is influenced by various factors, including the duration of the prime. Surprisingly, short-duration primes that are around the threshold for conscious awareness typically result in stronger effects than long-duration primes. The misattribution effect theory suggests that subliminal primes do not provide sufficient cognitive processing time for the affective feeling to be attributed to the prime. Instead, the neutral target being evaluated is credited for the affective experience. In everyday social interactions, we shift our gaze from one face to another, typically contemplating each face for only a few seconds. It is reasonable to assume that no affective priming takes place during such interactions. To investigate whether this is indeed the case, participants were asked to rate the valence of faces displayed one by one. Each face image simultaneously served as both a target (primed by the previous trial) and a prime (for the next trial). Depending on the participant's response time, images were typically displayed for about 1-2 s. As predicted by the misattribution effect theory, neutral targets were not affected by positive affective priming. However, non-neutral targets showed a robust priming effect, with emotional faces being perceived as even more negative or positive when the previously seen face was emotionally congruent. These results suggest that a "correct attribution effect" modulates how we perceive faces, continuously impacting our social interactions. Given the importance of faces in social communication, these findings have wide-ranging implications.
Affiliation(s)
- Nicolas M. Brunet
- Department of Psychology and Neuroscience, Millsaps College, Jackson, MS, United States
- Department of Psychology, California State University, San Bernardino, San Bernardino, CA, United States
8
Sun J, Dong T, Liu P. Holistic processing and visual characteristics of regulated and spontaneous expressions. J Vis 2023; 23:6. PMID: 36912592; PMCID: PMC10019490; DOI: 10.1167/jov.23.3.6.
Abstract
The rapid and efficient recognition of facial expressions is crucial for adaptive behaviors, and holistic processing is one of the critical processing methods to achieve this adaptation. Therefore, this study examined how the authenticity of facial expressions affects holistic processing and its attentional characteristics. The results show that both regulated and spontaneous expressions were processed holistically. However, the details of spontaneous expressions did not indicate typical holistic processing, with the congruency effect observed equally for aligned and misaligned conditions. No significant difference between the two expression types was observed in terms of reaction times and eye movement characteristics (i.e., total fixation duration, fixation counts, and first fixation duration). These findings suggest that holistic processing strategies differ between the two expression types. Nevertheless, the difference was not reflected in attentional engagement.
Affiliation(s)
- Juncai Sun
- School of Psychology, Qufu Normal University, Qufu, China.
- Tiantian Dong
- Department of Psychology, Shanghai Normal University, Shanghai, China.
- Ping Liu
- Department of Psychology, Shaoxing University, Shaoxing, China.
9
Chu YH, Chou LW, Lin HH, Chang KM. Consumer Visual and Affective Bias for Soothing Dolls. Int J Environ Res Public Health 2023; 20:2396. PMID: 36767763; PMCID: PMC9916300; DOI: 10.3390/ijerph20032396.
Abstract
Soothing dolls are becoming increasingly popular in a society with a great deal of physical and mental stress. Many products are also combined with soothing dolls to stimulate consumers' desire for impulse buying. However, there is no research on the relationship between consumers' purchasing behavior, consumers' preference for soothing dolls, and visual preference. The purpose of this study was to examine the possible factors that affect emotional and visual preferences for soothing dolls. Two local stores' sales lists were used to extract three different types of dolls, and 2D and 3D versions of these three dolls were used. Subjective emotional preferences were examined with the self-assessment manikin (SAM) scale, using 5-point Likert scales for the valence and arousal factors. An eye tracker was used to examine visual preferences, both before and after positive/negative emotion stimulation with the International Affective Picture System (IAPS). There were 37 subjects, with an age range of 20-28 years. The experimental results show that the average valence scores for the 2D and 3D dolls were 3.80 and 3.74, and the average arousal scores were 2.65 and 2.68, respectively; there was no statistical difference, and both 2D and 3D pictures had high valence scores. Eye tracker analysis revealed no gaze difference in visual preference between 2D and 3D dolls. After negative emotional picture stimulation, the observation time for the left-side doll decreased from 2.307 (SD 0.905) to 1.947 (SD 1.038) seconds, p < 0.001, and that for the right-side picture increased from 1.898 (SD 0.907) to 2.252 (SD 1.046) seconds, p < 0.001. The average observation time ratio for the eye region of the 3D dolls was 40.6%, higher than that of the 2D dolls (34.3%, p = 0.02). Soothing dolls may be beneficial for emotional relaxation, and they consistently showed high valence according to the SAM evaluation. Moreover, this study proposes a novel research model using an eye tracker and the SAM within the SOR framework.
Affiliation(s)
- Yu-Hsiu Chu
- Department of Physical Therapy, Graduate Institute of Rehabilitation Science, China Medical University, Taichung 406040, Taiwan
- Li-Wei Chou
- Department of Physical Therapy, Graduate Institute of Rehabilitation Science, China Medical University, Taichung 406040, Taiwan
- Department of Physical Medicine and Rehabilitation, Asia University Hospital, Asia University, Taichung 413505, Taiwan
- Department of Physical Medicine and Rehabilitation, China Medical University Hospital, Taichung 404332, Taiwan
- He-Hui Lin
- Department of Digital Media Design, Asia University, Taichung 413505, Taiwan
- Kang-Ming Chang
- Department of Digital Media Design, Asia University, Taichung 413505, Taiwan
- Department of Computer Science and Information Engineering, Asia University, Taichung 413505, Taiwan
- Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 406040, Taiwan
10
Suslow T, Lemster A, Koelkebeck K, Kersting A. Interpersonal problems and recognition of facial emotions in healthy individuals. Front Psychiatry 2023; 14:1139051. PMID: 37139331; PMCID: PMC10149975; DOI: 10.3389/fpsyt.2023.1139051.
Abstract
Background Recognition of emotions in faces is important for successful social interaction. Results from previous research based on clinical samples suggest that difficulties in identifying threat-related or negative emotions can go along with interpersonal problems. The present study examined whether associations between interpersonal difficulties and emotion decoding ability can be found in healthy individuals. Our analysis was focused on two main dimensions of interpersonal problems: agency (social dominance) and communion (social closeness). Materials and methods We constructed an emotion recognition task with facial expressions depicting six basic emotions (happiness, surprise, anger, disgust, sadness, and fear) in frontal and profile view, which was administered to 190 healthy adults (95 women) with a mean age of 23.9 years (SD = 3.8) along with the Inventory of Interpersonal Problems, measures of negative affect and verbal intelligence. The majority of participants were university students (80%). Emotion recognition accuracy was assessed using unbiased hit rates. Results Negative correlations were observed between interpersonal agency and recognition of facial anger and disgust that were independent of participants' gender and negative affect. Interpersonal communion was not related to recognition of facial emotions. Discussion Poor identification of other people's facial signals of anger and disgust might be a factor contributing to interpersonal problems with social dominance and intrusiveness. Anger expressions signal goal obstruction and proneness to engage in conflict whereas facial disgust indicates a request to increase social distance. The interpersonal problem dimension of communion appears not to be linked to the ability to recognize emotions from facial expressions.
Affiliation(s)
- Thomas Suslow
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig Medical Center, Leipzig, Germany
- Alexander Lemster
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig Medical Center, Leipzig, Germany
- Katja Koelkebeck
- Department of Psychiatry and Psychotherapy, LVR-Hospital Essen, Institute and Hospital of the University of Duisburg-Essen, Essen, Germany
- Anette Kersting
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig Medical Center, Leipzig, Germany
11
Liu K, Zhang C, Zhang Y, Wang X, Guo Y, Wang X. Perception of the Nose and Lower Face Before and After Orthognathic Surgery in Subjects with Dento-maxillofacial Deformities: An Eye-Tracking Study. Aesthetic Plast Surg 2022; 46:1731-1737. PMID: 35451608; DOI: 10.1007/s00266-022-02854-2.
Abstract
BACKGROUND Dento-maxillofacial deformities are often associated with nasal deviation, and patients often complain of nasal deviation after orthognathic surgery. This study aimed to quantitatively evaluate the facial visual attention given to dento-maxillofacial deformities accompanying nasal deviation from the perspective of patients and determine whether orthognathic surgery could alter this outcome. METHODS The scanning paths of 137 patients were recorded using an eye-tracking device; recordings were made while the patients viewed images of dento-maxillofacial deformities associated with various degrees of nasal deviation before or after orthognathic surgery. Visual attention focused on the lower face and nose was analyzed. RESULTS When viewing postoperative faces, the participants focused more visual attention on noses and less on the lower face than they did on preoperative faces. Interestingly, for preoperative faces, nasal deviation could significantly increase participants' visual attention to the lower face, and visual attention to noses was significantly increased when noses were deviated 12°, while for postoperative faces, a nasal deviation of 4° or more was associated with a significant increase in participants' visual attention to the nose. CONCLUSIONS Patients tended to focus their visual attention on the lower region of preoperative faces and ignored nose irregularities. Orthognathic surgery can alter visual attention, shifting it from the lower face to the nose, and a deviation of 4° or more could be a potential concern for patients. Clinicians must inform patients preoperatively about preexisting nasal deviations, which can guide surgical planning and help manage patient expectations. LEVEL OF EVIDENCE IV This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
12
Gonçalves A, Hattori Y, Adachi I. Staring death in the face: chimpanzees' attention towards conspecific skulls and the implications of a face module guiding their behaviour. R Soc Open Sci 2022; 9:210349. PMID: 35345434; PMCID: PMC8941397; DOI: 10.1098/rsos.210349.
Abstract
Chimpanzees exhibit a variety of behaviours surrounding their dead, although much less is known about how they respond towards conspecific skeletons. We tested chimpanzees' visual attention to images of conspecific and non-conspecific stimuli (cat/chimp/dog/rat), shown simultaneously in four corners of a screen in distinct orientations (frontal/diagonal/lateral) of either one of three types (faces/skulls/skull-shaped stones). Additionally, we compared their visual attention towards chimpanzee-only stimuli (faces/skulls/skull-shaped stones). Lastly, we tested their attention towards specific regions of chimpanzee skulls. We theorized that chimpanzee skulls retaining face-like features would be perceived similarly to chimpanzee faces and thus be subject to similar biases. Overall, supporting our hypotheses, the chimpanzees preferred conspecific-related stimuli. The results showed that chimpanzees attended: (i) significantly longer towards conspecific skulls than other species' skulls (particularly in forward-facing and, to a lesser extent, diagonal orientations); (ii) significantly longer towards conspecific faces than other species' faces at forward-facing and diagonal orientations; (iii) longer towards chimpanzee faces compared with chimpanzee skulls and skull-shaped stones; and (iv) significantly longer to the teeth, similar to findings for elephants. We suggest that chimpanzee skulls retain relevant, face-like features that arguably activate a domain-specific face module in chimpanzees' brains, guiding their attention.
Affiliation(s)
- André Gonçalves
- Language and Intelligence Section, Primate Research Institute, Kyoto University, 484-8506 Aichi, Japan
| | - Yuko Hattori
- Center for International Collaboration and Advanced Studies in Primatology, Primate Research Institute, Kyoto University, 484-8506 Aichi, Japan
| | - Ikuma Adachi
- Language and Intelligence Section, Primate Research Institute, Kyoto University, 484-8506 Aichi, Japan
| |
Collapse
13
Ferrari C, Ciricugno A, Urgesi C, Cattaneo Z. Cerebellar contribution to emotional body language perception: a TMS study. Soc Cogn Affect Neurosci 2022; 17:81-90. [PMID: 31588511] [PMCID: PMC8824541] [DOI: 10.1093/scan/nsz074]
Abstract
Consistent evidence suggests that the cerebellum contributes to the processing of emotional facial expressions. However, it is not yet known whether the cerebellum is recruited when emotions are expressed by body postures or movements, or whether it is recruited differently for positive and negative emotions. In this study, we asked healthy participants to discriminate between body postures (with masked face) expressing emotions of opposite valence (happiness vs anger, Experiment 1), or of the same valence (negative: anger vs sadness; positive: happiness vs surprise, Experiment 2). While performing the task, participants received online transcranial magnetic stimulation (TMS) over a region of the posterior left cerebellum and over two control sites (early visual cortex and vertex). We found that TMS over the cerebellum affected participants' ability to discriminate emotional body postures, but only when one of the emotions was negatively valenced (i.e. anger). These findings suggest that the cerebellar region we stimulated is involved in processing the emotional content conveyed by body postures and gestures. Our findings complement prior evidence on the role of the cerebellum in emotional face processing and have important implications from a clinical perspective, where non-invasive cerebellar stimulation is a promising tool for the treatment of motor, cognitive and affective deficits.
Affiliation(s)
- Chiara Ferrari: Department of Psychology, University of Milano–Bicocca, Milan 20126, Italy
- Andrea Ciricugno: Department of Brain and Behavioral Sciences, University of Pavia, Pavia 27100, Italy; IRCCS Mondino Foundation, Pavia 27100, Italy
- Cosimo Urgesi: Laboratory of Cognitive Neuroscience, Department of Languages and Literatures, Communication, Education and Society, University of Udine, Udine 33100, Italy; Scientific Institute, IRCCS E. Medea, Neuropsychiatry and Neurorehabilitation Unit, Bosisio Parini, Lecco 23900, Italy
- Zaira Cattaneo: Department of Psychology, University of Milano–Bicocca, Milan 20126, Italy; IRCCS Mondino Foundation, Pavia 27100, Italy
14
Liu K, Luo S, Abdelrehem A, Guo Y, Zhang Y, Wang X, Wang X. Facial visual attention to menton deviation: An objective evaluation by laypeople. J Stomatol Oral Maxillofac Surg 2021; 123:e115-e120. [PMID: 34600150] [DOI: 10.1016/j.jormas.2021.09.012]
Abstract
PURPOSE This study aimed to quantitatively evaluate whether the severity of menton deviation (MD) influences the facial perceptions of laypeople. We also aimed to determine the effectiveness of surgery in normalizing the distribution of laypeople's facial visual attention. METHODS The scanning paths of 177 laypeople were recorded using an eye-tracking device while they observed images of individuals without MD and of pre- and post-treatment subjects with different degrees of MD. The fixation durations on the areas of interest (AOIs) in each group were compared and analysed. RESULTS When observing the images of non-MD subjects, the eyes received the most fixation (more than the nose and lower face). When the MD increased to 3°, attention to the lower face increased (p = 0.001) with decreased attention to the eyes (p = 0.0126). At an MD of 9°, attention to the lower face increased sharply, exceeding even that to the eyes, with decreased attention to the nose (p = 0.0104). Compared with the findings for the post-treatment images, laypeople who observed the pretreatment images focused longer on the lower face and less on the eyes and nose (p = 0.001, p = 0.0322 and p = 0.0023, respectively). The distribution of fixation durations when observing the post-treatment images was similar to that when observing the images of non-MD subjects. CONCLUSIONS Laypeople can perceive an MD of 3°, which changes the distribution of visual attention, with attention focusing on the MD. When the deviation reaches 9°, it is very noticeable. Surgery can normalize the distribution of laypeople's facial visual attention, as shown by the responses to the post-treatment images.
Affiliation(s)
- Kai Liu: Department of Oral and Craniomaxillofacial Surgery, Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, National Clinical Research Center for Oral Diseases, Shanghai, China
- Songyuan Luo: Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ahmed Abdelrehem: Department of Craniomaxillofacial and Plastic Surgery, Faculty of Dentistry, Alexandria University, Alexandria, Egypt
- Yuxiang Guo: Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yujie Zhang: Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xinxi Wang: Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xudong Wang: Department of Oral and Craniomaxillofacial Surgery, Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, National Clinical Research Center for Oral Diseases, Shanghai, China
15
Shan B, Werger M, Huang W, Giddon DB. Quantitating the art and science of esthetic clinical success. J World Fed Orthod 2021; 10:49-58. [PMID: 33933391] [DOI: 10.1016/j.ejwf.2021.03.004]
Abstract
BACKGROUND Beginning with the biobehavioral bases of esthetic experiences, this article presents a quantitative analytic review of the motives and methods of providers and consumers of orthodontic treatment. METHOD A primary focus is determining the anthropometric bases of self- and other-perceived preference and satisfaction with changes in facial appearance. These quantitative analyses have been based on determining the frequency and magnitude of reliability and validity measures of diagnosis, treatment, and satisfaction outcome. Socioeconomic considerations are also quantitated regarding the discrepancy between the objective need for treatment, as determined for example by the Index of Orthodontic Treatment Need, and the subjective demand for treatment. RESULTS The major contribution of this article is the quantitation of the components of esthetic experience from sensation to perception using psychophysical methods, such as Perceptometrics, for determining the morphological basis of perceived facial attractiveness adjusted for ethnocultural differences and updated by 3-dimensional and artificial intelligence technology. Recent quantitation of smile components has also added to the measures of esthetically successful treatment. A further contribution of orthodontists to mental and physical health is demonstrated by the differences between perceived personality attributes in profile and full-frontal views of symmetric and asymmetric faces. Such information can facilitate the clinician's ability to determine the ideational representation of the patient's perceived pre- and post-treatment outcome. CONCLUSION The quantitative analysis of the motives and methods involved in the orthodontic treatment process has been combined with the neurophysiological correlates of producing and observing/evaluating the esthetic experiences of both patients and orthodontists/dentists.
Affiliation(s)
- Bo Shan: DMD Program, Rutgers School of Dental Medicine, Newark, NJ
- Marisa Werger: DMD Candidate Class of 2022, Harvard School of Dental Medicine, Boston, MA
- Wei Huang: Department of Orthodontics, Rutgers School of Dental Medicine, Newark, NJ
- Donald B Giddon: Developmental Biology, Harvard School of Dental Medicine, Boston, MA
16
Kinchella J, Guo K. Facial Expression Ambiguity and Face Image Quality Affect Differently on Expression Interpretation Bias. Perception 2021; 50:328-342. [PMID: 33709837] [DOI: 10.1177/03010066211000270]
Abstract
We often show an invariant or comparable recognition performance when perceiving prototypical facial expressions, such as happiness and anger, under different viewing settings. However, it is unclear to what extent the categorisation of ambiguous expressions, and the associated interpretation bias, are invariant under degraded viewing conditions. In this exploratory eye-tracking study, we systematically manipulated both facial expression ambiguity (via morphing happy and angry expressions in different proportions) and face image clarity/quality (via manipulating image resolution) to measure participants' expression categorisation performance, perceived expression intensity, and associated face-viewing gaze distribution. Our analysis revealed that increasing facial expression ambiguity and decreasing face image quality induced opposite directions of expression interpretation bias (negativity vs. positivity bias, or increased anger vs. increased happiness categorisation), the same direction of deterioration impact on rated expression intensity, and qualitatively different influences on face-viewing gaze allocation (decreased gaze at the eyes but increased gaze at the mouth vs. a stronger central fixation bias). These novel findings suggest that, in comparison with prototypical facial expressions, our visual system has less perceptual tolerance in processing ambiguous expressions, which are subject to viewing condition-dependent interpretation bias.
17
Abstract
While it has been established that expression perception is rapid, it is unclear whether early appraisal mechanisms invoke holistic perception. In the current study, we defined gist perception as the appraisal of a stimulus within a single glance (<125 ms). We employed the expression composite task used previously by Tanaka and colleagues in a 2012 study, with several critical modifications: (i) we developed stimuli that eliminated contrast artifacts, (ii) we employed a masking technique to abolish low-level cues, and (iii) all face stimuli were composite stimuli, compared with the mix of natural and composite stimuli used previously. Participants were shown a congruent (e.g. top: angry/bottom: angry) or incongruent (e.g. top: angry/bottom: happy) expression for 17, 50 or 250 ms and instructed to selectively attend to the cued expression depicted in the top (or bottom) half of the composite face and ignore the uncued portion. Compared with the isolated condition, a facilitation effect was found for congruent angry expressions, as well as an interference effect for incongruent happy and angry expressions, at the shortest exposure duration of 17 ms. Together, these results provide evidence that the holistic gist perception of expression cannot be overridden by selective attention.
Affiliation(s)
- Elizabeth Gregory: Department of Psychiatry, University of British Columbia, Vancouver, Canada
- James W Tanaka: Department of Psychology, University of Victoria, Victoria, Canada
- Xiaoyi Liu: Department of Psychology, University of Victoria, Victoria, Canada
18
Stevenson N, Guo K. Image Valence Modulates the Processing of Low-Resolution Affective Natural Scenes. Perception 2020; 49:1057-1068. [PMID: 32924858] [DOI: 10.1177/0301006620957213]
Abstract
In natural vision, noisy and distorted visual inputs often change our perceptual strategy in scene perception. However, it is unclear to what extent the affective meaning embedded in degraded natural scenes modulates our scene understanding and associated eye movements. In this eye-tracking experiment, by presenting natural scene images with different categories and levels of emotional valence (high-positive, medium-positive, neutral/low-positive, medium-negative, and high-negative), we systematically investigated human participants' perceptual sensitivity (image valence categorization and arousal rating) and image-viewing gaze behaviour in response to changes in image resolution. Our analysis revealed that reducing image resolution led to decreased valence recognition and arousal rating, a decreased number of fixations in image-viewing but increased individual fixation duration, and a stronger central fixation bias. Furthermore, these distortion effects were modulated by scene valence, with less deterioration impact on the valence categorization of negatively valenced scenes and on the gaze behaviour when viewing highly emotionally charged (high-positive and high-negative) scenes. It seems that our visual system shows a valence-modulated susceptibility to image distortions in scene perception.
19
Visual exploration of emotional body language: a behavioural and eye-tracking study. Psychol Res 2020; 85:2326-2339. [PMID: 32920675] [DOI: 10.1007/s00426-020-01416-y]
Abstract
Bodily postures are essential to correctly comprehend others' emotions and intentions. Nonetheless, very few studies have focused on the pattern of eye movements implicated in the recognition of emotional body language (EBL), demonstrating significant differences in relation to different emotions. A yet unanswered question regards the presence of the "left-gaze bias" (i.e. the tendency to look first, to make more fixations, and to spend more looking time on the left side of centrally presented stimuli) while scanning bodies. Hence, the present study aims at exploring both the presence of a left-gaze bias and the modulation of EBL visual exploration mechanisms, by investigating the fixation patterns (number of fixations and latency of the first fixation) of participants while they judged the emotional intensity of static bodily postures (Angry, Happy and Neutral, without the head). While the results on the latency of first fixations demonstrate for the first time the presence of the left-gaze bias while scanning bodies, suggesting that it could be related to the stronger expressiveness of the left hand (from the observer's point of view), the results on the number of fixations only partially fulfil our hypothesis. Moreover, an opposite viewing pattern between Angry and Happy bodily postures is shown. In sum, by integrating the spatial and temporal dimensions of gaze exploration patterns, the present results shed new light on EBL visual exploration mechanisms.
20
Maza A, Moliner B, Ferri J, Llorens R. Visual Behavior, Pupil Dilation, and Ability to Identify Emotions From Facial Expressions After Stroke. Front Neurol 2020; 10:1415. [PMID: 32116988] [PMCID: PMC7016192] [DOI: 10.3389/fneur.2019.01415]
Abstract
Social cognition is the innate human ability to interpret the emotional state of others from contextual verbal and non-verbal information, and to self-regulate accordingly. Facial expressions are one of the most relevant sources of non-verbal communication, and their interpretation has been extensively investigated in the literature using both behavioral and physiological measures, such as those derived from visual activity and visual responses. The decoding of facial expressions of emotion is performed by conscious and unconscious cognitive processes that involve a complex brain network, which can be damaged after cerebrovascular accidents. A diminished ability to identify facial expressions of emotion has been reported after stroke, which has traditionally been attributed to impaired emotional processing. While this can be true, an alteration in visual behavior after brain injury could also contribute negatively to this ability. This study investigated the accuracy, distribution of responses, visual behavior, and pupil dilation of individuals with stroke while they identified emotional facial expressions. Our results corroborated impaired performance after stroke and showed decreased attention to the eyes, evidenced by a diminished time and number of fixations in this area compared with healthy subjects, together with comparable pupil dilation. The differences in visual behavior reached statistical significance for some emotions when individuals with stroke and impaired performance were compared with healthy subjects, but not when individuals post-stroke with comparable performance were considered. This performance dependence of visual behavior, although not determinant, might indicate that altered visual behavior could be a negative contributing factor for emotion recognition from facial expressions.
Affiliation(s)
- Anny Maza: Neurorehabilitation and Brain Research Group, Instituto de Investigación e Innovación en Bioingeniería, Universitat Politècnica de València, Valencia, Spain
- Belén Moliner: NEURORHB, Servicio de Neurorrehabilitación de Hospitales Vithas, Valencia, Spain
- Joan Ferri: NEURORHB, Servicio de Neurorrehabilitación de Hospitales Vithas, Valencia, Spain
- Roberto Llorens: Neurorehabilitation and Brain Research Group, Instituto de Investigación e Innovación en Bioingeniería, Universitat Politècnica de València, Valencia, Spain; NEURORHB, Servicio de Neurorrehabilitación de Hospitales Vithas, Valencia, Spain
21
Huang P, Cai B, Zhou C, Wang W, Wang X, Gao D, Bao B. Contribution of the mandible position to the facial profile perception of a female facial profile: An eye-tracking study. Am J Orthod Dentofacial Orthop 2019; 156:641-652. [PMID: 31677673] [DOI: 10.1016/j.ajodo.2018.11.018]
Abstract
INTRODUCTION Studies concerning the visual attention of laypersons viewing the soft tissue facial profile of men and women with malocclusion are lacking. This study aimed to determine visual attention to the facial profiles of patients with different levels of mandibular protrusion and facial background attractiveness using an eye-tracking device. METHODS The scanning paths of 54 Chinese laypersons (50% female, 50% male, aged 18-23 years) were recorded by an eye-tracking device while they observed composite female facial profile images (n = 24), which were combinations of different degrees of mandibular protrusion (normal, slight, moderate, and severe) and different levels of facial background attractiveness (attractive, average, and unattractive). Dependent variables (fixation duration and first fixation time) were analyzed using repeated-measures factorial analysis of variance. RESULTS For normal mandibular profiles, the fixation duration on the eyes was significantly higher than that on other facial features (P <0.001). The lower face and nose received the least attention. As the degree of protrusion increased from slight to moderate, more attention was drawn to the lower face, accompanied by less attention to the eyes, in the unattractive group (P <0.05). When the degree of protrusion increased from moderate to severe, attention shifted significantly from the nose to the lower face in the attractive group (P <0.05). An attention shift from the eyes to the lower face was also found in the average group when the degree of protrusion rose from a normal profile to moderate protrusion (P <0.05). A significant interaction between facial attractiveness and mandibular protrusion was found for lower face fixation duration (P = 0.020). The threshold point (the degree of mandibular protrusion that evoked attention to the lower face) of the attractive facial background was higher than that of the unattractive background. Once evoked, the effect of mandibular protrusion in the attractive group tended to be stronger than that in the unattractive group, though without a statistically significant difference. CONCLUSIONS The eyes are the most salient area. An increasing degree of mandibular protrusion tends to draw attention to the lower face from other facial features. Background attractiveness can modify this behavior.
Affiliation(s)
- Peishan Huang: Orthodontic Department, Guanghua School of Stomatology, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, Guangdong, China
- Bin Cai: Orthodontic Department, Hospital of Stomatology, Guanghua School of Stomatology, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, Guangdong, China
- Chen Zhou: Orthodontic Department, Hospital of Stomatology, Guanghua School of Stomatology, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, Guangdong, China
- Weicai Wang: Orthodontic Department, Hospital of Stomatology, Guanghua School of Stomatology, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, Guangdong, China
- Xi Wang: Orthodontic Department, Hospital of Stomatology, Guanghua School of Stomatology, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, Guangdong, China
- Dingguo Gao: Psychology Department, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Brain Function and Disease, Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience, Mental Health, Guangzhou, Guangdong, China
- Baicheng Bao: Orthodontic Department, Hospital of Stomatology, Guanghua School of Stomatology, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, Guangdong, China
22
Calbi M, Aldouby H, Gersht O, Langiulli N, Gallese V, Umiltà MA. Haptic Aesthetics and Bodily Properties of Ori Gersht's Digital Art: A Behavioral and Eye-Tracking Study. Front Psychol 2019; 10:2520. [PMID: 31787915] [PMCID: PMC6853892] [DOI: 10.3389/fpsyg.2019.02520]
Abstract
Experimental aesthetics has shed light on the involvement of pre-motor areas in the perception of abstract art. However, the contribution of texture perception to aesthetic experience is still understudied. We hypothesized that digital screen-based art, despite its immateriality, might suggest potential sensorimotor stimulation. Original born-digital works of art were selected and manipulated by the artist himself. Five behavioral parameters: Beauty, Liking, Touch, Proximity, and Movement, were investigated under four experimental conditions: Resolution (high/low), and Magnitude (Entire image/detail). These were expected to modulate the quantity of material and textural information afforded by the image. While the Detail condition afforded less content-related information, our results show that it augmented the image's haptic appeal. High Resolution improved the haptic and aesthetic properties of the images. Furthermore, aesthetic ratings positively correlated with sensorimotor ratings. Our results demonstrate a strict relation between the aesthetic and sensorimotor/haptic qualities of the images, empirically establishing a relationship between beholders' bodily involvement and their aesthetic judgment of visual works of art. In addition, we found that beholders' oculomotor behavior is selectively modulated by the perceptual manipulations being performed. The eye-tracking results indicate that the observation of the Entire, original images is the only condition in which the latency of the first fixation is shorter when participants gaze to the left side of the images. These results thus demonstrate the existence of a left-side bias during the observation of digital works of art, in particular, while participants are observing their original version.
Affiliation(s)
- Marta Calbi: Unit of Neuroscience, Department of Medicine and Surgery, University of Parma, Parma, Italy
- Hava Aldouby: Department of Literature, Language, and the Arts, The Open University of Israel, Ra’anana, Israel
- Ori Gersht: Department of Photography, University for the Creative Arts, Farnham, United Kingdom
- Nunzio Langiulli: Unit of Neuroscience, Department of Medicine and Surgery, University of Parma, Parma, Italy
- Vittorio Gallese: Unit of Neuroscience, Department of Medicine and Surgery, University of Parma, Parma, Italy; Department of Art History, Italian Academy for Advanced Studies, Columbia University, New York, NY, United States
- Maria Alessandra Umiltà: Department of Art History, Italian Academy for Advanced Studies, Columbia University, New York, NY, United States; Department of Food and Drug, University of Parma, Parma, Italy
23
Pollux PM, Craddock M, Guo K. Gaze patterns in viewing static and dynamic body expressions. Acta Psychol (Amst) 2019; 198:102862. [PMID: 31226535] [DOI: 10.1016/j.actpsy.2019.05.014]
Abstract
Evidence for the importance of bodily cues for emotion recognition has grown over the last two decades. Despite this growing literature, it is underspecified how observers view whole bodies for body expression recognition. Here we investigate to what extent body-viewing is face- and context-specific when participants categorize whole body expressions in static (Experiment 1) and dynamic displays (Experiment 2). Eye-movement recordings showed that observers viewed the face exclusively when it was visible in dynamic displays, whereas viewing was distributed over the head, torso and arms in static displays and in dynamic displays in which faces were not visible. The strong face bias for dynamic face-visible expressions suggests that viewing of the body responds flexibly to the informativeness of facial cues for emotion categorisation. However, when facial expressions are static or not visible, observers adopt a viewing strategy that includes all upper body regions. This viewing strategy is further influenced by subtle viewing biases directed towards emotion-specific body postures and movements to optimise recruitment of diagnostic information for emotion categorisation.
24
Guo K, Li Z, Yan Y, Li W. Viewing heterospecific facial expressions: an eye-tracking study of human and monkey viewers. Exp Brain Res 2019; 237:2045-2059. [PMID: 31165915] [PMCID: PMC6647127] [DOI: 10.1007/s00221-019-05574-3]
Abstract
Common facial expressions of emotion have distinctive patterns of facial muscle movements that are culturally similar among humans, and perceiving these expressions is associated with stereotypical gaze allocation at local facial regions that are characteristic of each expression, such as the eyes in angry faces. It is, however, unclear to what extent this 'universality' view can be extended to the processing of heterospecific facial expressions, and how a 'social learning' process contributes to heterospecific expression perception. In this eye-tracking study, we examined the face-viewing gaze allocation of human (including dog owners and non-dog owners) and monkey observers while they explored expressive human, chimpanzee, monkey and dog faces (positive, neutral and negative expressions in human and dog faces; neutral and negative expressions in chimpanzee and monkey faces). Human observers showed species- and experience-dependent expression categorization accuracy. Furthermore, both human and monkey observers demonstrated different face-viewing gaze distributions, which were also species-dependent. Specifically, humans predominantly attended to the eyes of human faces but the mouth of animal faces when judging facial expressions. Monkeys' gaze distributions when exploring human and monkey faces were qualitatively different from those when exploring chimpanzee and dog faces. Interestingly, the gaze behaviour of both human and monkey observers was further affected by their prior experience of the viewed species. It seems that facial expression processing is species-dependent, and social learning may play a significant role in discriminating even rudimentary types of heterospecific expressions.
Affiliation(s)
- Kun Guo: School of Psychology, University of Lincoln, Lincoln, LN6 7TS, UK
- Zhihan Li: State Key Laboratory of Cognitive Neuroscience and Learning, and IDG, Beijing Normal University, Beijing, 100875, China
- Yin Yan: State Key Laboratory of Cognitive Neuroscience and Learning, and IDG, Beijing Normal University, Beijing, 100875, China
- Wu Li: State Key Laboratory of Cognitive Neuroscience and Learning, and IDG, Beijing Normal University, Beijing, 100875, China
25
Rodway V, Tatham B, Guo K. Effect of model race and viewing perspective on body attractiveness and body size assessment in young Caucasian women: an eye-tracking study. Psychol Res 2018; 83:347-356. [PMID: 30554329] [PMCID: PMC6434025] [DOI: 10.1007/s00426-018-1138-9]
Abstract
Research has indicated that Caucasian women gaze more often at the waist–hip and chest regions than at other local body areas when assessing female body attractiveness and body size, and that this stereotypical gaze distribution is further modulated by their own body satisfaction and body composition. However, little is known about whether model race and viewing perspective could affect women's body-viewing gaze behaviour and body perception. Here, we presented female body images of Caucasian, Asian and African avatars in a continuum of common dress sizes in full frontal, mid-profile and rear view, and asked young Caucasian women to rate perceived body attractiveness and body size. Their body-viewing gaze distributions were then correlated with their behavioural responses, their own body composition and their body satisfaction. Our analysis revealed a clear in-group favouritism, in which Caucasian women tended to rate Caucasian avatars as more attractive and slimmer than Asian and African avatars. Their body-viewing gaze patterns, on the other hand, were not affected by avatar race but were modulated by viewing perspective. The frontal-view body (especially the upper-body and waist–hip regions) attracted the highest proportion of viewing time, followed by the mid-profile view and then the rear-view body. Furthermore, our participants' own body composition and satisfaction level did not affect their judgement of other women's body attractiveness and body size, but could influence their gaze allocation at local body features. It seems that both body perception and body-viewing gaze behaviour are subject to group and individual biases.
Affiliation(s)
- Victoria Rodway
- School of Psychology, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK
- Bethany Tatham
- School of Psychology, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK
- Kun Guo
- School of Psychology, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK
|
26
|
Guo K, Soornack Y, Settle R. Expression-dependent susceptibility to face distortions in processing of facial expressions of emotion. Vision Res 2018; 157:112-122. [PMID: 29496513 DOI: 10.1016/j.visres.2018.02.001] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2017] [Revised: 02/02/2018] [Accepted: 02/04/2018] [Indexed: 11/29/2022]
Abstract
Our capability of recognizing facial expressions of emotion under different viewing conditions implies the existence of an invariant expression representation. As natural visual signals are often distorted and our perceptual strategy changes with external noise level, it is essential to understand how expression perception is susceptible to face distortion and whether the same facial cues are used to process high- and low-quality face images. We systematically manipulated face image resolution (experiment 1) and blur (experiment 2), and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. Our analysis revealed a reasonable tolerance to face distortion in expression perception. Reducing image resolution up to 48 × 64 pixels or increasing image blur up to 15 cycles/image had little impact on expression assessment and associated gaze behaviour. Further distortion led to decreased expression categorization accuracy and intensity rating, increased reaction time and fixation duration, and stronger central fixation bias which was not driven by distortion-induced changes in local image saliency. Interestingly, the observed distortion effects were expression-dependent with less deterioration impact on happy and surprise expressions, suggesting this distortion-invariant facial expression perception might be achieved through the categorical model involving a non-linear configural combination of local facial features.
Affiliation(s)
- Kun Guo
- School of Psychology, University of Lincoln, UK
|
27
|
Li B, Yang F. An Across-Target Study on Visual Attentions in Facial Expression Recognition. Interdiscip Sci 2018; 10:367-374. [PMID: 29383565 DOI: 10.1007/s12539-018-0281-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2017] [Revised: 01/09/2018] [Accepted: 01/11/2018] [Indexed: 11/27/2022]
Abstract
As a simulation of human expression recognition, studies on automatic expression recognition aim to draw useful insights through close, accurate observation of human expression processing via advanced devices. Eye-trackers are the devices most commonly used to obtain eye-movement data. However, due to discrepancies between target faces, across-target analysis is limited in these studies, which greatly reduces the chance of finding latent eye-behaviour patterns. By exploiting correspondences between targets, this study achieves an across-target analysis to explore the attention pattern in expression recognition. Fixations from different targets are mapped onto a synthetic face to generate an across-target fixation map, then tokenized with areas of interest (AOIs), measured in receiver operating characteristic (ROC) space, modelled by linear regression and compared through Pearson's correlation. The resulting averaged correlation values vary in the range (0.60, 0.86), and illustrate that there is significant similarity between subjects when recognizing the same expression classes.
Affiliation(s)
- Baomin Li
- Faculty of Education, East China Normal University, Shanghai, 200062, China
- Fenglei Yang
- School of Computer Engineering and Science, Shanghai University, Shanghai, 200444, China
|
28
|
Li S, Li P, Wang W, Zhu X, Luo W. The effect of emotionally valenced eye region images on visuocortical processing of surprised faces. Psychophysiology 2017; 55:e13039. [DOI: 10.1111/psyp.13039] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2017] [Revised: 11/02/2017] [Accepted: 11/07/2017] [Indexed: 12/31/2022]
Affiliation(s)
- Shuaixia Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Ping Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Wei Wang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Xiangru Zhu
- Institute of Cognition, Brain and Health, Henan University, Kaifeng, China
- Institute of Psychology and Behavior, Henan University, Kaifeng, China
- Wenbo Luo
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Laboratory of Emotion and Mental Health, Chongqing University of Arts and Sciences, Chongqing, China
|
29
|
Faces are special, but facial expressions aren’t: Insights from an oculomotor capture paradigm. Atten Percept Psychophys 2017; 79:1438-1452. [DOI: 10.3758/s13414-017-1313-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
30
|
Abstract
Individuals vary in perceptual accuracy when categorising facial expressions, yet it is unclear how these individual differences in non-clinical populations are related to cognitive processing stages at facial information acquisition and interpretation. We tested 104 healthy adults in a facial expression categorisation task, and correlated their categorisation accuracy with face-viewing gaze allocation and personal traits assessed with the Autism Quotient, an anxiety inventory and the Self-Monitoring Scale. Gaze allocation had a limited but emotion-specific impact on categorising expressions. Specifically, longer gazes at the eyes and nose regions were coupled with more accurate categorisation of disgust and sad expressions, respectively. Regarding trait measurements, a higher autistic score was coupled with better recognition of sad but worse recognition of anger expressions, and contributed to a categorisation bias towards sad expressions; whereas a higher anxiety level was associated with greater categorisation accuracy across all expressions and with an increased tendency to gaze at the nose region. It seems that both anxiety and autistic-like traits were associated with individual variation in expression categorisation, but this association is not necessarily mediated by variation in gaze allocation at expression-specific local facial regions. The results suggest that both facial information acquisition and interpretation capabilities contribute to individual differences in expression categorisation within non-clinical populations.
Affiliation(s)
- Corinne Green
- School of Psychology, University of Lincoln, Lincoln, UK
- Kun Guo
- School of Psychology, University of Lincoln, Lincoln, UK
|
31
|
Sutherland CAM, Young AW, Rhodes G. Facial first impressions from another angle: How social judgements are influenced by changeable and invariant facial properties. Br J Psychol 2016; 108:397-415. [DOI: 10.1111/bjop.12206] [Citation(s) in RCA: 70] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2016] [Revised: 04/05/2016] [Indexed: 11/27/2022]
Affiliation(s)
- Clare A. M. Sutherland
- ARC Centre of Excellence in Cognition and its Disorders, School of Psychology, University of Western Australia, Crawley, WA, Australia
- Andrew W. Young
- Department of Psychology, University of York, Heslington, North Yorkshire, UK
- Gillian Rhodes
- ARC Centre of Excellence in Cognition and its Disorders, School of Psychology, University of Western Australia, Crawley, WA, Australia
|
32
|
Gavin CJ, Houghton S, Guo K. Dog owners show experience-based viewing behaviour in judging dog face approachability. PSYCHOLOGICAL RESEARCH 2015; 81:75-82. [PMID: 26486649 DOI: 10.1007/s00426-015-0718-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2015] [Accepted: 10/09/2015] [Indexed: 11/29/2022]
Abstract
Our prior visual experience plays a critical role in face perception. We show superior perceptual performance for differentiating conspecific (vs non-conspecific), own-race (vs other-race) and familiar (vs unfamiliar) faces. However, it remains unclear whether our experience with faces of other species influences our gaze allocation for extracting salient facial information. In this eye-tracking study, we asked both dog owners and non-owners to judge the approachability of human, monkey and dog faces, and systematically compared their behavioural performance and the gaze pattern associated with the task. Compared to non-owners, dog owners assessed dog faces in less time and with fewer fixations, but gave higher approachability ratings. The gaze allocation within local facial features was also modulated by ownership. The averaged proportion of fixations and viewing time directed at the dog mouth region was significantly lower for the dog owners, and more experienced dog owners tended to look more at the dog eyes, suggesting the adoption of a prior experience-based viewing behaviour for assessing dog approachability. No differences in behavioural performance or gaze pattern were observed between dog owners and non-owners when judging human and monkey faces, implying that the dog owners' experience-based gaze strategy for viewing dog faces was not transferable across faces of other species.
Affiliation(s)
- Carla Jade Gavin
- School of Psychology, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK
- Sarah Houghton
- School of Psychology, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK
- Kun Guo
- School of Psychology, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK
|