1
Tamura H, Nakauchi S, Minami T. Glossiness perception and its pupillary response. Vision Res 2024; 219:108393. [PMID: 38579405] [DOI: 10.1016/j.visres.2024.108393] [Received: 12/05/2022] [Revised: 03/18/2024] [Accepted: 03/20/2024]
Abstract
Recent studies have revealed that pupillary responses change depending on perceptual factors, such as subjective brightness caused by optical illusions, as well as on luminance. However, how perceptual factors derived from the perceived glossiness of object surfaces affect the pupillary response remains unclear. We investigated the relationship between glossiness perception and the pupillary response through a glossiness rating experiment in which pupil diameter was recorded. As stimuli, we prepared general object images (original) and randomized images (shuffled), which comprised the same images with small square regions shuffled at random. Image features were controlled by matching the luminance histograms. Observers rated the perceived glossiness of stimuli presented for 3,000 ms while changes in their pupil diameters were recorded. At the peak constriction of the pupillary response during the stimulus duration, images with higher glossiness ratings constricted the pupil more than those with lower glossiness ratings. A linear mixed-effects model demonstrated that the glossiness rating, image category (original/shuffled), variance of the luminance histogram, and stimulus area were the most effective predictors of the pupillary response. These results suggest that the illusory brightness conveyed by image regions of high-glossiness objects, such as specular highlights, induces pupil constriction.
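The regression analysis described in this abstract can be sketched as follows. This is an illustrative sketch with synthetic data, not the authors' code or dataset; all variable names are assumptions. For simplicity it fits ordinary least squares, whereas the paper's actual analysis used a linear mixed-effects model with per-observer random effects (e.g. via a tool such as statsmodels' MixedLM or R's lme4).

```python
# Predict peak pupil constriction from glossiness rating, image category,
# luminance-histogram variance, and stimulus area (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic predictors (names and scales are illustrative assumptions)
gloss_rating = rng.integers(1, 8, n).astype(float)  # perceived glossiness, 1-7
is_shuffled = rng.integers(0, 2, n).astype(float)   # category: 0=original, 1=shuffled
lum_variance = rng.normal(0.2, 0.05, n)             # variance of luminance histogram
stim_area = rng.normal(0.5, 0.1, n)                 # stimulus area

# Synthetic response: higher glossiness -> stronger constriction (more negative)
pupil_change = -0.02 * gloss_rating + 0.01 * is_shuffled + rng.normal(0, 0.01, n)

# Design matrix with an intercept column; solve by least squares
X = np.column_stack([np.ones(n), gloss_rating, is_shuffled, lum_variance, stim_area])
beta, *_ = np.linalg.lstsq(X, pupil_change, rcond=None)

names = ["intercept", "gloss_rating", "is_shuffled", "lum_variance", "stim_area"]
for name, b in zip(names, beta):
    print(f"{name}: {b:+.4f}")
```

With data generated this way, the fitted coefficient on the glossiness rating comes out negative, mirroring the reported pattern that higher-glossiness images drive stronger pupil constriction.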
Affiliation(s)
- Hideki Tamura
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Aichi, Japan.
- Shigeki Nakauchi
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Aichi, Japan
- Tetsuto Minami
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Aichi, Japan
2
Kuraguchi K, Nittono H. Face inversion effect on perceived cuteness of infant faces. Perception 2023; 52:844-852. [PMID: 37661828] [DOI: 10.1177/03010066231198417]
Abstract
Research has demonstrated that attractiveness evaluations of adult faces are less accurate when the faces are inverted than when they are upright. It remains unknown, however, whether a similar effect applies to the perceived cuteness of infant faces, which is assumed to be based on elemental facial features called the "baby schema." In this research, we studied the face inversion effect on the perceived cuteness of infant faces in a rating task and a two-alternative forced-choice (2AFC) task, with beauty examined as a control dimension. Although the rating task revealed no inversion effect, the 2AFC task showed poorer discrimination performance with inverted faces than with upright faces in both evaluations. These results indicate that the infant cuteness and beauty dimensions are well correlated with each other, and that their perception not only relies on elemental features that are not strongly affected by inversion but is also affected by holistic facial configurations when a detailed comparison is required.
3
Zhang K, Yuan Y, Chen J, Wang G, Chen Q, Luo M. Eye Tracking Research on the Influence of Spatial Frequency and Inversion Effect on Facial Expression Processing in Children with Autism Spectrum Disorder. Brain Sci 2022; 12:283. [PMID: 35204046] [PMCID: PMC8870542] [DOI: 10.3390/brainsci12020283] [Received: 01/07/2022] [Revised: 02/11/2022] [Accepted: 02/16/2022]
Abstract
Facial expression processing depends mainly on whether the facial features related to expressions can be fully acquired, and whether appropriate processing strategies can be adopted under different conditions. Children with autism spectrum disorder (ASD) have difficulty accurately recognizing facial expressions and responding appropriately, which is regarded as an important cause of their social disorders. This study used eye tracking technology to explore the internal processing mechanism of facial expressions in children with ASD under the influence of spatial frequency and inversion effects, with the aim of improving their social disorders. The facial expression recognition rates and eye tracking characteristics of children with ASD and typically developing (TD) children on facial areas of interest were recorded and analyzed. The multi-factor mixed-design experiment showed that the facial expression recognition rate of children with ASD was significantly lower than that of TD children under all conditions. TD children paid more visual attention to the eyes area, whereas children with ASD preferred the features of the mouth area and lacked visual attention to, and processing of, the eyes area. When the face was inverted, TD children showed the inversion effect, manifested as a significant decrease in expression recognition rate, under all three spatial frequency conditions. Children with ASD, however, showed the inversion effect only under the low spatial frequency (LSF) condition, indicating that they mainly used a featural processing method but retained some capacity for configural processing under the LSF condition. The eye tracking results showed that when the face was inverted or facial feature information was weakened, both children with ASD and TD children adjusted their facial expression processing strategies accordingly, increasing visual attention to and information processing of their preferred areas: the fixation counts and fixation duration of TD children on the eyes area increased significantly, while the fixation duration of children with ASD on the mouth area increased significantly. The results of this study provide theoretical and practical support for facial expression intervention in children with ASD.
Affiliation(s)
- Kun Zhang
- National Engineering Research Center for E-Learning, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China
- National Engineering Laboratory for Educational Big Data, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China
- Yishuang Yuan
- National Engineering Research Center for E-Learning, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China
- National Engineering Laboratory for Educational Big Data, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China
- Jingying Chen
- National Engineering Research Center for E-Learning, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China
- National Engineering Laboratory for Educational Big Data, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China
- Correspondence:
- Guangshuai Wang
- School of Computer Science, Wuhan University, Wuhan 430072, China
- Qian Chen
- National Engineering Research Center for E-Learning, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China
- National Engineering Laboratory for Educational Big Data, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China
- Meijuan Luo
- National Engineering Research Center for E-Learning, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China
- National Engineering Laboratory for Educational Big Data, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China
4
Yao L, Dai Q, Wu Q, Liu Y, Yu Y, Guo T, Zhou M, Yang J, Takahashi S, Ejima Y, Wu J. Eye Size Affects Cuteness in Different Facial Expressions and Ages. Front Psychol 2022; 12:674456. [PMID: 35087437] [PMCID: PMC8786738] [DOI: 10.3389/fpsyg.2021.674456] [Received: 03/02/2021] [Accepted: 12/01/2021]
Abstract
Researchers have suggested that infants exhibiting the baby schema are considered cute. Such studies have mainly focused on changes in the overall set of baby schema facial features; however, whether a change in eye size alone affects the perception of cuteness across different facial expressions and ages has not been explicitly evaluated until now. In the present study, a paired comparison method and a 7-point scale were used to investigate the effects of eye size on perceived cuteness across facial expressions (positive, neutral, and negative) and ages (adults and infants). The results show that stimuli with large eyes were perceived to be cuter than those with either unmanipulated or small eyes across all facial expressions and age groups. This suggests not only that the effect of the baby schema on cuteness is based on changes in a set of features but also that eye size, as an individual feature, can affect the perception of cuteness.
Affiliation(s)
- Lichang Yao
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Qi Dai
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Qiong Wu
- School of Education, Suzhou University of Science and Technology, Suzhou, China
- Yang Liu
- School of Education, Suzhou University of Science and Technology, Suzhou, China
- Yiyang Yu
- Cognitive Neuroscience Laboratory, Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
- Ting Guo
- Cognitive Neuroscience Laboratory, Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
- Mengni Zhou
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Jiajia Yang
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Satoshi Takahashi
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Yoshimichi Ejima
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Jinglong Wu
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan; Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China