1. Zhao H, Chen W, Li F, Wang X, Pan X, Liu Y, Wang L, Sun W, Li F, Jiang S. Dissecting the long-term neurobehavioral impact of embryonic benz[a]anthracene exposure on zebrafish: Social dysfunction and molecular pathway activation. Sci Total Environ 2024; 930:172615. PMID: 38657801. DOI: 10.1016/j.scitotenv.2024.172615.
Abstract
Benz[a]anthracene (BaA), a prevalent environmental contaminant within the polycyclic aromatic hydrocarbon class, poses risks to both human health and aquatic ecosystems. Its impact on neural development and subsequent social behavior patterns remains inadequately explored. In this investigation, we employed zebrafish as a model to examine the persistent effects of embryonic BaA exposure on social behaviors across developmental stages, from larvae and juveniles to adults. Our findings indicate that BaA exposure during embryogenesis produces neurobehavioral deficits that last into adulthood. Proteomic analysis highlights that BaA may impair neuro-immune crosstalk in zebrafish larvae. Remarkably, our proteomic data also hint at activation of the aryl hydrocarbon receptor (AHR) and cytochrome P450 1A (CYP1A) pathway by BaA, leading to the hypothesis that this pathway may be implicated in the disruption of neuro-immune interactions and thereby contribute to the observed behavioral disruptions. In summary, our findings suggest that early BaA exposure disrupts social behaviors, such as social ability and shoaling, from the larval stage through maturity in zebrafish, potentially through detrimental effects on neuro-immune processes mediated by the AHR-CYP1A pathway.
Affiliation(s)
- Haichu Zhao
- Institute for Advanced Study, Shenzhen University, Shenzhen 518060, China; College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
- Weiran Chen
- Ministry of Education and Shanghai Key Laboratory of Children's Environmental Health, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200092, China; Department of Developmental and Behavioral Pediatric & Child Primary Care, Brain and Behavioral Research Unit of Shanghai Institute for Pediatric Research, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Fei Li
- Biomedical Analysis Center, Army Medical University, Chongqing 400038, China
- Xiaoyang Wang
- Biomedical Analysis Center, Army Medical University, Chongqing 400038, China
- Xin Pan
- Institute for Advanced Study, Shenzhen University, Shenzhen 518060, China
- Yang Liu
- Biomedical Analysis Center, Army Medical University, Chongqing 400038, China
- Liting Wang
- Biomedical Analysis Center, Army Medical University, Chongqing 400038, China
- Wei Sun
- Biomedical Analysis Center, Army Medical University, Chongqing 400038, China
- Fei Li
- Ministry of Education and Shanghai Key Laboratory of Children's Environmental Health, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200092, China; Department of Developmental and Behavioral Pediatric & Child Primary Care, Brain and Behavioral Research Unit of Shanghai Institute for Pediatric Research, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shan Jiang
- Institute for Advanced Study, Shenzhen University, Shenzhen 518060, China
2. Gobbo S, Lega C, De Sandi A, Daini R. The role of preSMA and STS in face recognition: A transcranial magnetic stimulation (TMS) study. Neuropsychologia 2024; 198:108877. PMID: 38555065. DOI: 10.1016/j.neuropsychologia.2024.108877.
Abstract
Current models propose that face recognition is mediated by two independent yet interacting anatomo-functional systems: one that processes facial features, mainly mediated by the Fusiform Face Area, and another involved in the extraction of dynamic information from faces, subserved by the Superior Temporal Sulcus (STS). The pre-Supplementary Motor Area (pre-SMA) is also implicated in facial expression processing through its involvement in motor mimicry. However, the literature documents the roles of the STS and pre-SMA only for facial expression recognition, without relating them to face recognition. In addition, the literature shows a facilitatory role of facial motion in the recognition of unfamiliar faces, particularly for poor recognizers. The present study investigated the role of the STS and pre-SMA in unfamiliar face recognition in people with different face recognition skills. Thirty-four healthy participants received repetitive transcranial magnetic stimulation over the right posterior STS, over the pre-SMA, or as a sham condition during a matching task with faces encoded through facial expression, rigid head movement, or static presentation (i.e., absence of any facial or head motion). All faces were presented without emotional content. Results indicate that the STS has a direct role in recognizing identities through rigid head movement and an indirect role in facial expression processing. This dissociation represents a step forward with respect to current face processing models, suggesting that different types of motion involve separate brain and cognitive processes. The pre-SMA interacts with face recognition skills, increasing the performance of poor recognizers and decreasing that of good recognizers in all presentation conditions. Together, the results suggest at least partially different mechanisms for face recognition in poor and good recognizers, and distinct roles for the STS and pre-SMA in face recognition.
Affiliation(s)
- Silvia Gobbo
- Department of Psychology, University of Milan-Bicocca, Milan, Italy
- Carlotta Lega
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Roberta Daini
- Department of Psychology, University of Milan-Bicocca, Milan, Italy
3. Butcher N, Bennetts RJ, Sexton L, Barbanta A, Lander K. Eye movement differences when recognising and learning moving and static faces. Q J Exp Psychol (Hove) 2024:17470218241252145. PMID: 38644390. DOI: 10.1177/17470218241252145.
Abstract
Seeing a face in motion can help subsequent face recognition. Several explanations have been proposed for this "motion advantage," but other factors that might play a role have received less attention. For example, facial movement might enhance recognition by attracting attention to the internal facial features, thereby facilitating identification. However, there is no direct evidence that motion increases attention to the regions of the face that facilitate identification (i.e., the internal features) compared with static faces. We tested this hypothesis by recording participants' eye movements while they completed famous-face recognition (Experiment 1, N = 32) and face-learning (Experiment 2, N = 60; Experiment 3, N = 68) tasks, with presentation style manipulated (moving or static). Across all three experiments, a motion advantage was found, and participants directed a higher proportion of fixations to the internal features (i.e., eyes, nose, and mouth) of moving faces than of static faces. Conversely, the proportion of fixations to the internal non-feature area (i.e., cheeks, forehead, chin) and the external area (Experiment 3) was significantly reduced for moving compared with static faces (all ps < .05). The results suggest that during both familiar and unfamiliar face recognition, facial motion is associated with increased attention to internal facial features, but only during familiar face recognition is the magnitude of the motion advantage significantly related to the proportion of fixations directed to the internal features.
Affiliation(s)
- Natalie Butcher
- Department of Psychology, Teesside University, Middlesbrough, UK
- Laura Sexton
- Department of Psychology, Teesside University, Middlesbrough, UK
- School of Psychology, Faculty of Health Sciences and Wellbeing, University of Sunderland, Sunderland, UK
- Karen Lander
- Division of Psychology, Communication and Human Neuroscience, University of Manchester, Manchester, UK
4. Yeung SC, Sidhu J, Youn S, Schaefer HRH, Barton JJS, Corrow SL. The role of the upper and lower face in the recognition of facial identity in dynamic stimuli. Vision Res 2023; 206:108194. PMID: 36801665. PMCID: PMC10085847. DOI: 10.1016/j.visres.2023.108194.
Abstract
Studies with static faces find that upper face halves are more easily recognized than lower face halves: an upper-face advantage. However, faces are usually encountered as dynamic stimuli, and there is evidence that dynamic information influences face identity recognition. This raises the question of whether dynamic faces also show an upper-face advantage. The objective of this study was to examine whether recognition of recently learned faces was more accurate for upper or lower face halves, and whether this depended on whether the face was presented as static or dynamic. In Experiment 1, subjects learned 12 faces: 6 from static images and 6 from dynamic video clips of actors in silent conversation. In Experiment 2, subjects learned 12 faces, all from dynamic video clips. During the testing phase of Experiments 1 (between subjects) and 2 (within subjects), subjects were asked to recognize upper and lower face halves presented as static images, dynamic clips, or both. The data did not provide evidence for a difference in the upper-face advantage between static and dynamic faces. However, in both experiments we found an upper-face advantage, consistent with the prior literature, for female faces but not for male faces. In conclusion, the use of dynamic stimuli may have little effect on the presence of an upper-face advantage, especially when the static comparison contains a series of static images, rather than a single static image, and is of sufficient image quality. Future studies could investigate the influence of face gender on the presence of an upper-face advantage.
Affiliation(s)
- Shanna C Yeung
- Psychology Department, Bethel University, 3900 Bethel Drive, St Paul, MN 55112, USA
- Jhunam Sidhu
- Psychology Department, Bethel University, 3900 Bethel Drive, St Paul, MN 55112, USA
- Sena Youn
- Psychology Department, Bethel University, 3900 Bethel Drive, St Paul, MN 55112, USA
- Heidi R H Schaefer
- Psychology Department, Bethel University, 3900 Bethel Drive, St Paul, MN 55112, USA
- Jason J S Barton
- Psychology Department, Bethel University, 3900 Bethel Drive, St Paul, MN 55112, USA
- Sherryse L Corrow
- Psychology Department, Bethel University, 3900 Bethel Drive, St Paul, MN 55112, USA
5. Xiao NG, Angeli V, Fang W, Manera V, Liu S, Castiello U, Ge L, Lee K, Simion F. The discrimination of expressions in facial movements by infants: A study with point-light displays. J Exp Child Psychol 2023; 232:105671. PMID: 37003155. DOI: 10.1016/j.jecp.2023.105671.
Abstract
Perceiving facial expressions is an essential ability for infants. Although previous studies indicated that infants can perceive emotion from expressive facial movements, the developmental course of this ability remains largely unknown. To examine infants' processing of facial movements in isolation, we used point-light displays (PLDs) to present emotionally expressive facial movements. Specifically, we used a habituation and visual paired comparison (VPC) paradigm to investigate whether 3-, 6-, and 9-month-olds could discriminate between happy and fear PLDs after being habituated to a happy PLD (happy-habituation condition) or a fear PLD (fear-habituation condition). The 3-month-olds discriminated between the happy and fear PLDs in both the happy- and fear-habituation conditions. The 6- and 9-month-olds showed discrimination only in the happy-habituation condition, not in the fear-habituation condition. These results indicate a developmental change in the processing of expressive facial movements: younger infants tended to process low-level motion signals regardless of the depicted emotion, whereas older infants tended to process expressions, an ability that emerged for familiar facial expressions (e.g., happy). Additional analyses of individual differences and eye movement patterns supported this conclusion. Experiment 2 showed that the findings of Experiment 1 were not due to a spontaneous preference for fear PLDs. Using inverted PLDs, Experiment 3 further suggested that 3-month-olds already perceive PLDs as face-like stimuli.
Affiliation(s)
- Naiqi G Xiao
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4L8, Canada
- Valentina Angeli
- Department of Developmental and Social Psychology, University of Padova, 35131 Padova, Italy
- Wei Fang
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4L8, Canada
- Valeria Manera
- Cognition Behaviour Technology (CoBTeK), EA 7276, Edmond and Lily Safra Center, University of Nice Sophia Antipolis, 06000 Nice, France
- Shaoying Liu
- Department of Psychology, Zhejiang Sci-Tech University, Hangzhou 310018, China
- Umberto Castiello
- Department of General Psychology, University of Padova, 35131 Padova, Italy; Cognitive Neuroscience Center, University of Padova, 35131 Padova, Italy
- Liezhong Ge
- Center for Psychological Sciences, Zhejiang University, Hangzhou 310027, China
- Kang Lee
- Department of Applied Psychology and Human Development, University of Toronto, Toronto, Ontario M5R 2X2, Canada
- Francesca Simion
- Department of Developmental and Social Psychology, University of Padova, 35131 Padova, Italy; Cognitive Neuroscience Center, University of Padova, 35131 Padova, Italy
6. Wong HK, Keeble DRT, Stephen ID. Do they 'look' different(ly)? Dynamic face recognition in Malaysians: Chinese, Malays and Indians compared. Br J Psychol 2023; 114 Suppl 1:134-149. PMID: 36647242. DOI: 10.1111/bjop.12629.
Abstract
Previous cross-cultural eye-tracking studies of face recognition have found differences in the eye movement strategies that observers employ when perceiving faces. However, it is unclear (1) to what degree this effect is fundamentally related to culture and (2) to what extent facial physiognomy can account for the differences in looking strategies when scanning own- and other-race faces. In the current study, Malay, Chinese and Indian young adults living in the same multiracial country performed a modified yes/no recognition task. Participants' recognition accuracy and eye movements were recorded while they viewed muted face videos of own- and other-race individuals. Behavioural results revealed a clear own-race advantage in recognition memory, and eye-tracking results showed that the three ethnic groups adopted dissimilar fixation patterns when perceiving faces: Chinese participants preferentially attended more to the eyes than Indian participants did, while Indian participants made more and longer fixations on the nose than Malay participants did. In addition, we detected statistically significant, though subtle, differences in fixation patterns between the faces of the three races. These findings suggest that racial differences in face-scanning patterns may be attributed both to culture and to variations in facial physiognomy between races.
Affiliation(s)
- Hoo Keat Wong
- School of Psychology, University of Nottingham Malaysia, Semenyih, Malaysia
- David R T Keeble
- School of Psychology, University of Nottingham Malaysia, Semenyih, Malaysia
- Ian D Stephen
- School of Social Sciences, Nottingham Trent University, Nottingham, UK
7. Zhou Y, Lin J, Zhou G. Misaligned dynamic faces are processed holistically. Vision Res 2022; 191:107970. PMID: 34784566. DOI: 10.1016/j.visres.2021.107970.
Abstract
It was recently proposed that both experience-driven and object-based perceptual grouping contribute to holistic face processing. We investigated whether motion, as a common-fate perceptual grouping cue, could enhance the holistic processing of misaligned faces. We manipulated the alignment and motion (dynamic or static) of study and test faces in a modified complete composite task in which the congruency effect was regarded as an indicator of holistic processing. Participants made same-different judgments about the top halves of two sequentially presented composite faces. When the study faces were dynamic, misaligned-misaligned face pairs were processed holistically regardless of whether the test faces were dynamic or static. When the study faces were static, misaligned-misaligned face pairs showed no holistic processing, and neither did inverted faces. These results indicate that motion can promote the holistic processing of misaligned faces. Our findings provide important insights into different types of holistic face processing, and we discuss these types, as well as their relationships with each other, in depth.
Affiliation(s)
- Yu Zhou
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
- Jia Lin
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
- Guomei Zhou
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
8.
Abstract
Emotion perception frequently involves the integration of visual and auditory information. During multisensory emotion perception, the attention devoted to each modality can be measured by calculating the difference between trials in which the facial expression and speech input exhibit the same emotion (congruent) and trials in which the facial expression and speech input exhibit different emotions (incongruent) to determine the modality that has the strongest influence. Previous cross-cultural studies have found that individuals from Western cultures are more distracted by information in the visual modality (i.e., visual interference), whereas individuals from Eastern cultures are more distracted by information in the auditory modality (i.e., auditory interference). These results suggest that culture shapes modality interference in multisensory emotion perception. It is unclear, however, how emotion perception is influenced by cultural immersion and exposure due to migration to a new country with distinct social norms. In the present study, we investigated how the amount of daily exposure to a new culture and the length of immersion impact multisensory emotion perception in Chinese-English bilinguals who moved from China to the United States. In an emotion recognition task, participants viewed facial expressions and heard emotional but meaningless speech either from their previous Eastern culture (i.e., Asian face-Mandarin speech) or from their new Western culture (i.e., Caucasian face-English speech) and were asked to identify the emotion from either the face or voice, while ignoring the other modality. Analyses of daily cultural exposure revealed that bilinguals with low daily exposure to the U.S. culture experienced greater interference from the auditory modality, whereas bilinguals with high daily exposure to the U.S. culture experienced greater interference from the visual modality. 
These results demonstrate that everyday exposure to new cultural norms increases the likelihood of showing a modality interference pattern that is more common in the new culture. Analyses of immersion duration revealed that bilinguals who spent more time in the United States were equally distracted by faces and voices, whereas bilinguals who spent less time in the United States experienced greater visual interference when evaluating emotional information from the West, possibly due to over-compensation when evaluating emotional information from the less familiar culture. These findings suggest that the amount of daily exposure to a new culture and length of cultural immersion influence multisensory emotion perception in bilingual immigrants. While increased daily exposure to the new culture aids with the adaptation to new cultural norms, increased length of cultural immersion leads to similar patterns in modality interference between the old and new cultures. We conclude that cultural experience shapes the way we perceive and evaluate the emotions of others.
9. Ogawa S, Pfaff DW, Parhar IS. Fish as a model in social neuroscience: conservation and diversity in the social brain network. Biol Rev Camb Philos Soc 2021; 96:999-1020. PMID: 33559323. DOI: 10.1111/brv.12689.
Abstract
Mechanisms for fish social behaviours involve a social brain network (SBN) that is evolutionarily conserved among vertebrates. However, considerable diversity is observed in actual behaviour patterns amongst the nearly 30,000 fish species: the huge variation found in socio-sexual behaviours and strategies is likely generated by a morphologically and genetically well-conserved small forebrain system. Teleost fish therefore provide a useful model for studying the fundamental mechanisms underlying social brain functions. Herein we review the foundations of fish social behaviours, including sensory, hormonal, molecular and neuroanatomical features. Gonadotropin-releasing hormone neurons clearly play important roles, but the participation of vasotocin and isotocin is also highlighted. Genetic investigations of the developing fish brain have revealed the molecular complexity of neural development of the SBN. In addition to straightforward social behaviours such as sex and aggression, new experiments have revealed higher-order and unique phenomena in fish, such as social eavesdropping and social buffering. Finally, observations interpreted as 'collective cognition' in fish can likely be explained by careful observation of sensory determinants and analyses using the dynamics of quantitative scaling. Understanding the functions of the SBN in fish provides clues to the origin and evolution of higher social functions in vertebrates.
Affiliation(s)
- Satoshi Ogawa
- Brain Research Institute, Jeffrey Cheah School of Medicine and Health Sciences, Monash University Malaysia, Bandar Sunway, Selangor, 47500, Malaysia
- Donald W Pfaff
- Laboratory of Neurobiology and Behavior, Rockefeller University, New York, NY, 10065, U.S.A
- Ishwar S Parhar
- Brain Research Institute, Jeffrey Cheah School of Medicine and Health Sciences, Monash University Malaysia, Bandar Sunway, Selangor, 47500, Malaysia
10. Taskiran M, Kahraman N, Eroglu Erdem C. Hybrid face recognition under adverse conditions using appearance-based and dynamic features of smile expression. IET Biometrics 2020. DOI: 10.1049/bme2.12006.
Affiliation(s)
- Murat Taskiran
- Department of Electronics and Communication Engineering, Yildiz Technical University, Istanbul, Turkey
- Nihan Kahraman
- Department of Electronics and Communication Engineering, Yildiz Technical University, Istanbul, Turkey
11. Collective Housing of Mice of Different Age Groups before Maturity Affects Mouse Behavior. Behav Neurol 2020; 2020:6856935. PMID: 33273986. PMCID: PMC7676975. DOI: 10.1155/2020/6856935.
Abstract
Background: Although group housing is recommended by many animal management and ethical guidelines, the effect of collectively housing mice of different age groups on mouse behavior has not been clarified. Since development of the central nervous system continues until sexual maturation, the stress of social-rank formation among male individuals under mixed housing conditions could affect post-maturation behavior. To assess these effects, sexually immature mice of different ages were housed in the same cage, and a series of behavioral tests was performed after maturation. Results: The findings for three groups of mice were compared: junior mice housed with older mice, senior mice housed with younger mice, and mice housed with others of the same age. Junior mice showed higher body weight and activity as well as lower grip strength and fewer anxiety-like behaviors than the other mice. In contrast, senior mice showed lower body temperature and increased aggression, antinociceptive effect, and home-cage activity in the dark period compared with the other mice. Conclusions: Combined housing of immature mice of different age groups thus affects mouse behavior after maturation. Appropriate pre-maturation housing conditions are crucial to eliminate the uncontrollable bias caused by age-related social stratification.
12. Bylemans T, Vrancken L, Verfaillie K. Developmental Prosopagnosia and Elastic Versus Static Face Recognition in an Incidental Learning Task. Front Psychol 2020; 11:2098. PMID: 32982859. PMCID: PMC7488957. DOI: 10.3389/fpsyg.2020.02098.
Abstract
Previous research on the beneficial effect of motion has proposed that learning a face in motion provides additional cues for recognition. Surprisingly, however, few studies have examined this effect in an incidental learning task or in developmental prosopagnosia (DP), even though such studies could provide more valuable information about everyday face recognition than the perception of static faces does. In the current study, 18 young adults (Experiment 1) and five DPs with 10 age-matched controls (Experiment 2) participated in an incidental learning task in which static and elastically moving unfamiliar faces were presented sequentially. The faces were later to be recognized in a delayed visual search task in which each face either kept its original presentation mode or switched (from static to elastically moving, or vice versa). In Experiment 1, performance in the elastic-elastic condition was significantly better than in the elastic-static and static-elastic conditions; however, no significant difference was detected relative to the static-static condition. Except for higher scores in the elastic-elastic than in the static-elastic condition in the age-matched group, no other significant differences between conditions were detected for either the DPs or the age-matched controls. The current study thus could not provide compelling evidence for a general beneficial effect of motion. Age-matched controls performed generally worse than DPs, which may potentially be explained by their higher rates of false alarms. Factors that could have influenced the results are discussed.
Affiliation(s)
- Tom Bylemans
- Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Leia Vrancken
- Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Karl Verfaillie
- Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
13. Kittler PM, Kim SY, Flory MJ, Phan HTT, Karmel BZ, Gardner JM. Effects of motion and audio-visual redundancy on upright and inverted face and feature preferences in 4-13-month old pre- and full-term NICU graduates. Infant Behav Dev 2020; 60:101439. PMID: 32438215. PMCID: PMC7671943. DOI: 10.1016/j.infbeh.2020.101439.
Abstract
NICU infants are reported to have diminished social orientation and an increased risk of socio-communicative disorders. In this eye-tracking study, we used a preference for upright over inverted faces as a gauge of social interest in high-medical-risk full- and pre-term NICU infants. We examined the effects of facial motion and audio-visual redundancy on face and eye/mouth preferences across the first year. Upright and inverted baby faces were presented simultaneously in a paired-preference paradigm, with motion and synchronized vocalization varied. NICU risk factors, including birth weight, sex, and degree of CNS injury, were examined. Overall, infants preferred the more socially salient upright faces, making this, to our knowledge, the first report of an upright over inverted face preference among high-medical-risk NICU infants. Infants with abnormalities on cranial ultrasound displayed lower social interest, i.e., less preferential interest in upright faces, when viewing static faces; however, motion selectively increased their looking time to upright faces to a level equal to that of infants in the other CNS injury groups. We also observed an age-related sex effect suggesting higher risk in NICU males: females increased their attention to the mouth in upright faces across the first year, especially between 7 and 10 months, but males did not. Although vocalization increased diffuse attention toward the screen, contrary to our predictions there was no evidence that the audio-visual redundancy embodied in a vocalizing face focused additional attention on upright faces or mouths. This unexpected result may point to a vulnerability in the response to talking faces among NICU infants that could affect later verbal and socio-communicative development.
Affiliation(s)
- P M Kittler
- Department of Infant Development, New York State Institute for Basic Research in Developmental Disabilities, United States
- S-Y Kim
- Department of Infant Development, New York State Institute for Basic Research in Developmental Disabilities, United States
- M J Flory
- Department of Infant Development, New York State Institute for Basic Research in Developmental Disabilities, United States
- H T T Phan
- Department of Infant Development, New York State Institute for Basic Research in Developmental Disabilities, United States
- B Z Karmel
- Department of Infant Development, New York State Institute for Basic Research in Developmental Disabilities and Department of Pediatrics, Richmond University Medical Center, United States
- J M Gardner
- Department of Infant Development, New York State Institute for Basic Research in Developmental Disabilities and Department of Pediatrics, Richmond University Medical Center, United States
|
14
|
Lander K, Butcher NL. Recognizing Genuine From Posed Facial Expressions: Exploring the Role of Dynamic Information and Face Familiarity. Front Psychol 2020; 11:1378. [PMID: 32719634 PMCID: PMC7347903 DOI: 10.3389/fpsyg.2020.01378] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2020] [Accepted: 05/22/2020] [Indexed: 11/13/2022] Open
Abstract
The accurate recognition of emotion is important for interpersonal interaction and for navigating our social world. However, not all facial displays reflect the emotional experience currently being felt by the expresser. Indeed, faces express both genuine and posed displays of emotion. In this article, we summarize the importance of motion for the recognition of face identity before critically outlining the role of dynamic information in recognizing facial expressions and distinguishing between genuine and posed expressions of emotion. We propose that both dynamic information and face familiarity may modulate our ability to determine whether an expression is genuine or not. Finally, we consider the shared role of dynamic information across different face recognition tasks and the wider impact of face familiarity on distinguishing genuine from posed expressions during real-world interactions.
Affiliation(s)
- Karen Lander
- Division of Neuroscience and Experimental Psychology, University of Manchester, Manchester, United Kingdom
- Natalie L Butcher
- School of Social Sciences, Humanities and Law, Teesside University, Middlesbrough, United Kingdom
|
15
|
Amornvit P, Sanohkan S. The Accuracy of Digital Face Scans Obtained from 3D Scanners: An In Vitro Study. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2019; 16:ijerph16245061. [PMID: 31842255 PMCID: PMC6950499 DOI: 10.3390/ijerph16245061] [Citation(s) in RCA: 39] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/16/2019] [Revised: 12/04/2019] [Accepted: 12/05/2019] [Indexed: 12/19/2022]
Abstract
Face scanners promise wide applications in medicine and dentistry, including facial recognition, capturing facial emotions, facial cosmetic planning and surgery, and maxillofacial rehabilitation. Higher accuracy improves the quality of the data recorded by a face scanner, which ultimately improves the outcome. Although various face scanners are available on the market, there is no evidence identifying a face scanner suitable for practical applications. The aim of this in vitro study was to analyze face scans obtained from four scanners, EinScan Pro (EP) and EinScan Pro 2X Plus (EP+) (Shining 3D Tech. Co., Ltd., Hangzhou, China), iPhone X (IPX) (Apple, Cupertino, CA, USA), and Planmeca ProMax 3D Mid (PM) (Planmeca USA, Inc., IL, USA), and to compare the scans with a control (measured with a Vernier caliper), in order to identify an appropriate scanner for face scanning. A master face model was designed in Rhinoceros 3D modeling software (Rhino, Robert McNeel and Associates) and printed from polylactic acid at a resolution of 200 microns on the x, y, and z axes. The face model was scanned five times with each of the four scanners according to the manufacturers' recommendations: EP and EP+ using Shining software, IPX using the Bellus3D Face Application (Bellus3D, version 1.6.2, Bellus3D, Inc., Campbell, CA, USA), and PM. Scan data were saved as stereolithography (STL) files, from which digital face models were created in Rhinoceros. Reference distances along the three axes (x, y, and z) were each measured five times on the physical model with a digital Vernier caliper (VC) (Mitutoyo 150 mm Digital Caliper, Mitutoyo Co., Kanagawa, Japan), and the means served as the control. The same measurements were then taken on the digital face models of EP, EP+, IPX, and PM in Rhinoceros. Descriptive statistics were computed in SPSS version 20 (IBM, Chicago, USA). One-way ANOVA with Scheffe post hoc tests was used to analyze differences between the control and the scans (EP, EP+, IPX, and PM); the significance level was set at p = 0.05. EP+ showed the highest accuracy. EP showed moderate accuracy (accurate up to 10 mm of length), whereas IPX and PM showed the least accuracy. EP+ was also accurate in measuring 2 mm of depth (diameter 6 mm); all other scanners (EP, IPX, and PM) were less accurate in measuring depth. Overall, the accuracy of an optical scan depends on the technology used by each scanner. EP+ is recommended for face scanning.
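The statistical comparison described here, scanner measurements tested against a caliper control with one-way ANOVA, can be sketched as follows. The measurement values are hypothetical, and the Scheffe post hoc step used in the study is noted but omitted for brevity:

```python
from scipy.stats import f_oneway

# Hypothetical repeated measurements (mm) of one reference distance,
# five per method, with the Vernier caliper serving as the control
control = [50.01, 49.99, 50.00, 50.02, 49.98]   # caliper (control)
ep_plus = [50.03, 50.01, 49.99, 50.02, 50.00]   # EinScan Pro 2X Plus
iphone  = [50.45, 50.38, 50.52, 50.41, 50.47]   # iPhone X + Bellus3D

# One-way ANOVA across the three groups of measurements
f_stat, p_value = f_oneway(control, ep_plus, iphone)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A small p (< 0.05) indicates at least one method's mean differs from
# the others; a post hoc test (e.g., Scheffe, as in the study) would
# then localize which scanner deviates from the control.
```

With these illustrative numbers the offset iPhone X group drives a significant F, which is the pattern the study reports for the less accurate scanners.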
|
16
|
Can Salient Stimuli Enhance Responses in Disorders of Consciousness? A Systematic Review. Curr Neurol Neurosci Rep 2019; 19:98. [PMID: 31773300 DOI: 10.1007/s11910-019-1018-8] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
PURPOSE OF REVIEW Diagnostic classification of patients with disorders of consciousness (DoC) is based on the clinician's observation of volitional behaviours. However, patients' caregivers often report higher levels of responsiveness than those observed during clinical assessment. Thus, increasing efforts have been aimed at understanding the effects of self-referential and emotional stimuli on patients' responsiveness. Here we systematically reviewed the original experimental studies that compared behavioural and electrophysiological responses to salient vs. neutral material in patients in vegetative state/unresponsive wakefulness syndrome or in minimally conscious state. RECENT FINDINGS Most of the reviewed studies showed that salient stimuli (i.e., the patient's own or familiar faces, the patient's own name, and familiar voices) seem to elicit more behavioural or electrophysiological responses than neutral pictures or sounds. Importantly, a substantial percentage of patients seem to respond to salient stimuli only. The present review could foster the use of personally salient stimuli in assessing DoC. However, the low overall quality of evidence and some limitations in the general reviewing process warrant caution in transferring these suggestions into clinical practice.
|
17
|
Viccaro E, Sands E, Springer C. Spaced Retrieval Using Static and Dynamic Images to Improve Face-Name Recognition: Alzheimer's Dementia and Vascular Dementia. AMERICAN JOURNAL OF SPEECH-LANGUAGE PATHOLOGY 2019; 28:1184-1197. [PMID: 31194916 DOI: 10.1044/2019_ajslp-18-0131] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Purpose The primary objective of this study was to examine whether spaced retrieval (SR) using dynamic images (video clips without audio) is more effective than SR using static images for improving face-name recognition in persons with dementia. A secondary objective examined how long associations were retained after participants reached criterion. A final objective sought to determine whether there is a relationship between SR training and dementia diagnosis. Method A repeated-measures design analyzed whether SR using dynamic images was more effective than SR using static images for face-name recognition. Twelve participants diagnosed with Alzheimer's dementia or vascular dementia were randomly assigned to 2 experimental conditions in which the presentation of images was counterbalanced. Results All participants demonstrated improvement in face-name recognition; there was no significant difference between the dynamic and static images. Eleven of 12 participants retained the information from 1 to 4 weeks post training. Additional analysis revealed a significant interaction effect when diagnoses and images were examined together: participants with vascular dementia demonstrated improved performance using SR with static images, whereas participants with Alzheimer's dementia displayed improved performance using SR with dynamic images. Conclusions SR using static and/or dynamic images improved face-name recognition in persons with dementia. Further research is warranted to continue exploring the relationship between dementia diagnosis and SR performance using static and dynamic images.
Affiliation(s)
- Elizabeth Viccaro
- Department of Communication Sciences and Disorders, Long Island University Post, Brookville, NY
- Elaine Sands
- Department of Communication Sciences and Disorders, Adelphi University, Garden City, NY
|
18
|
The Frozen Effect: Objects in motion are more aesthetically appealing than objects frozen in time. PLoS One 2019; 14:e0215813. [PMID: 31095600 PMCID: PMC6522023 DOI: 10.1371/journal.pone.0215813] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2018] [Accepted: 04/09/2019] [Indexed: 11/20/2022] Open
Abstract
Videos of moving faces are more flattering than static images of the same face, a phenomenon dubbed the Frozen Face Effect. This may reflect an aesthetic preference for faces viewed in a more ecological context than still photographs. In the current set of experiments, we sought to determine whether this effect is unique to facial processing, or if motion confers an aesthetic benefit to other stimulus categories as well, such as bodies and objects—that is, a more generalized ‘Frozen Effect’ (FE). If motion were the critical factor in the FE, we would expect the video of a body or object in motion to be significantly more appealing than when seen in individual, static frames. To examine this, we asked participants to rate sets of videos of bodies and objects in motion along with the still frames constituting each video. Extending the original FFE, we found that participants rated videos as significantly more flattering than each video’s corresponding still images, regardless of stimulus domain, suggesting that the FFE generalizes well beyond face perception. Interestingly, the magnitude of the FE increased with the predictability of stimulus movement. Our results suggest that observers prefer bodies and objects in motion over the same information presented in static form, and the more predictable the motion, the stronger the preference. Motion imbues objects and bodies with greater aesthetic appeal, which has implications for how one might choose to portray oneself in various social media platforms.
|
19
|
Quadrelli E, Conte S, Macchi Cassia V, Turati C. Emotion in motion: Facial dynamics affect infants' neural processing of emotions. Dev Psychobiol 2019; 61:843-858. [DOI: 10.1002/dev.21860] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2018] [Revised: 02/27/2019] [Accepted: 03/17/2019] [Indexed: 01/14/2023]
Affiliation(s)
- Ermanno Quadrelli
- Department of Psychology, University of Milano-Bicocca, Milano, Italy
- NeuroMI, Milan Center for Neuroscience, Milan, Italy
- Stefania Conte
- Department of Psychology, University of Milano-Bicocca, Milano, Italy
- NeuroMI, Milan Center for Neuroscience, Milan, Italy
- Viola Macchi Cassia
- Department of Psychology, University of Milano-Bicocca, Milano, Italy
- NeuroMI, Milan Center for Neuroscience, Milan, Italy
- Chiara Turati
- Department of Psychology, University of Milano-Bicocca, Milano, Italy
- NeuroMI, Milan Center for Neuroscience, Milan, Italy
|
20
|
Sonne T, Kingo OS, Krøjgaard P. Meaningful Memory? Eighteen-Month-Olds Only Remember Cartoons With a Meaningful Storyline. Front Psychol 2018; 9:2388. [PMID: 30546338 PMCID: PMC6279865 DOI: 10.3389/fpsyg.2018.02388] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2018] [Accepted: 11/13/2018] [Indexed: 11/13/2022] Open
Abstract
In two studies we investigated the importance of a storyline for 18-month-old infants' memory of cartoons across a 2-week delay, using the visual paired-comparison (VPC) paradigm. In Study 1, seventy-one 18-month-olds were tested with cartoons similar to those used in a recent study from our lab, while the richness of the storyline information was varied. In a VPC task, half of the infants watched uncompromised versions of the cartoons from the recent study (Storyline Condition), whereas the other half watched pixelized versions (Pixelized Condition), in which the number of pixels was reduced by 98%, obscuring the narrative while leaving perceptual details (e.g., colors, movements) the same. Two weeks later, infants were simultaneously presented with the familiar cartoon and a novel cartoon of the same version (Storyline or Pixelized) while being eye-tracked. Results showed that only the infants in the Storyline Condition remembered the target cartoon, suggesting that the storyline is important for memory. However, an alternative interpretation is that infants in the Storyline Condition remembered the target cartoon not because of the storyline, but because of the static conceptual information about the objects and agents present in the cartoon (which was not visible in the pixelized version). To test this possibility, a control study was conducted. In Study 2, thirty-six infants were therefore presented with a version of the cartoon broken into 1-s segments shown out of order, preserving the static conceptual information (e.g., objects and agents) while still disrupting the storyline. Infants in this condition still did not remember the target cartoon, suggesting that the meaningfulness of the storyline, and not only static conceptual information, is important for later memory.
Affiliation(s)
- Trine Sonne
- Department of Psychology and Behavioral Sciences, Center on Autobiographical Memory Research, Aarhus University, Aarhus, Denmark
|
21
|
Trojano L, Moretta P, Masotta O, Loreto V, Estraneo A. Visual pursuit of one's own face in disorders of consciousness: a quantitative analysis. Brain Inj 2018; 32:1549-1555. [PMID: 30059631 DOI: 10.1080/02699052.2018.1504117] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Abstract
BACKGROUND Eye behaviour is important for distinguishing minimally conscious state (MCS) from vegetative state (VS). OBJECTIVE To identify the conditions most suitable for characterizing patients in MCS and in VS on quantitative assessment of visual tracking. DESIGN This is a cross-sectional study. PARTICIPANTS In total, 20 patients in VS, 13 in MCS plus, and 11 in MCS minus participated in this study. SETTING Neurorehabilitation Unit. METHODS Eye behaviour was evaluated with an infrared system; stimuli were a red circle, a picture of the patient's own face, and a picture of an unfamiliar face, each moving slowly on a personal computer (PC) monitor. Visual tracking on the horizontal and vertical axes was compared. MAIN OUTCOME MEASURES The main outcome measures were the proportion of on-target fixations and mean fixation duration. RESULTS The proportion of on-target fixations differed as a function of the stimulus in patients in MCS plus but not in the other groups. The own face and the unfamiliar face elicited a similar proportion of on-target fixations. Tracking along the horizontal axis was more accurate than tracking along the vertical axis in patients in both MCS plus and MCS minus. Fixation duration did not differ among the three groups. CONCLUSIONS Horizontal visual tracking of salient stimuli seems particularly suitable for eliciting on-target fixations. Quantitative assessment of visual tracking can complement clinical evaluation to reduce diagnostic uncertainty between patients in MCS and VS.
Affiliation(s)
- Luigi Trojano
- Neuropsychology Lab, Department of Psychology, University of Campania 'Luigi Vanvitelli', Caserta, Italy
- Pasquale Moretta
- Disorder of Consciousness Lab, Maugeri Clinical and Scientific Institutes, IRCCS, Telese Terme, BN, Italy
- Orsola Masotta
- Disorder of Consciousness Lab, Maugeri Clinical and Scientific Institutes, IRCCS, Telese Terme, BN, Italy
- Vincenzo Loreto
- Disorder of Consciousness Lab, Maugeri Clinical and Scientific Institutes, IRCCS, Telese Terme, BN, Italy
- Anna Estraneo
- Disorder of Consciousness Lab, Maugeri Clinical and Scientific Institutes, IRCCS, Telese Terme, BN, Italy
|
22
|
An Integrated Neural Framework for Dynamic and Static Face Processing. Sci Rep 2018; 8:7036. [PMID: 29728577 PMCID: PMC5935689 DOI: 10.1038/s41598-018-25405-9] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2017] [Accepted: 04/03/2018] [Indexed: 11/19/2022] Open
Abstract
Faces convey rich information including identity, gender, and expression. Current neural models of face processing suggest a dissociation between the processing of invariant facial aspects, such as identity and gender, which engages the fusiform face area (FFA), and the processing of changeable aspects, such as expression and eye gaze, which engages the posterior superior temporal sulcus face area (pSTS-FA). Recent studies report a second dissociation within this network, such that the pSTS-FA, but not the FFA, shows a much stronger response to dynamic than to static faces. The aim of the current study was to test a unified model that accounts for these two functional characteristics of the neural face network. In an fMRI experiment, we presented static and dynamic faces while subjects judged an invariant (gender) or a changeable facial aspect (expression). We found that the pSTS-FA was more engaged in processing dynamic than static faces and changeable than invariant aspects, whereas the occipital face area (OFA) and FFA showed similar responses across all four conditions. These findings support an integrated neural model of face processing in which the ventral areas extract form information from both invariant and changeable facial aspects, whereas the dorsal face areas are sensitive to dynamic and changeable facial aspects.
|
23
|
Dobs K, Schultz J, Bülthoff I, Gardner JL. Task-dependent enhancement of facial expression and identity representations in human cortex. Neuroimage 2018; 172:689-702. [PMID: 29432802 DOI: 10.1016/j.neuroimage.2018.02.013] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2017] [Revised: 02/02/2018] [Accepted: 02/06/2018] [Indexed: 11/24/2022] Open
Abstract
What cortical mechanisms allow humans to easily discern the expression or identity of a face? Subjects detected changes in expression or identity of a stream of dynamic faces while we measured BOLD responses from topographically and functionally defined areas throughout the visual hierarchy. Responses in dorsal areas increased during the expression task, whereas responses in ventral areas increased during the identity task, consistent with previous studies. Similar to ventral areas, early visual areas showed increased activity during the identity task. If visual responses are weighted by perceptual mechanisms according to their magnitude, these increased responses would lead to improved attentional selection of the task-appropriate facial aspect. Alternatively, increased responses could be a signature of a sensitivity enhancement mechanism that improves representations of the attended facial aspect. Consistent with the latter sensitivity enhancement mechanism, attending to expression led to enhanced decoding of exemplars of expression both in early visual and dorsal areas relative to attending identity. Similarly, decoding identity exemplars when attending to identity was improved in dorsal and ventral areas. We conclude that attending to expression or identity of dynamic faces is associated with increased selectivity in representations consistent with sensitivity enhancement.
Affiliation(s)
- Katharina Dobs
- Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076 Tübingen, Germany; Laboratory for Human Systems Neuroscience, RIKEN Brain Science Institute, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 43 Vassar Street, Cambridge, MA 02139, USA
- Johannes Schultz
- Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076 Tübingen, Germany; Division of Medical Psychology and Department of Psychiatry, University of Bonn, Sigmund Freud Str. 25, 53105 Bonn, Germany
- Isabelle Bülthoff
- Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076 Tübingen, Germany
- Justin L Gardner
- Laboratory for Human Systems Neuroscience, RIKEN Brain Science Institute, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan; Department of Psychology, Stanford University, 450 Serra Mall, Stanford, CA 94305, USA
|
24
|
Bülthoff I, Mohler BJ, Thornton IM. Face recognition of full-bodied avatars by active observers in a virtual environment. Vision Res 2018; 157:242-251. [PMID: 29274811 DOI: 10.1016/j.visres.2017.12.001] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2017] [Revised: 12/01/2017] [Accepted: 12/13/2017] [Indexed: 10/18/2022]
Abstract
Viewing faces in motion or attached to a body, rather than as isolated static faces, improves their subsequent recognition. Here we enhanced the ecological validity of face encoding by having observers physically move in a virtual room populated by life-size avatars. We compared the recognition performance of this active group to that of two control groups. The first control group watched a passive reenactment of the visual experience of the active group. The second control group saw static screenshots of the avatars. All groups performed the same old/new recognition task after learning. Half of the learned faces were shown at test in an orientation close to that experienced during learning, while the others were viewed from a new viewing angle. All observers found novel views more difficult to recognize than familiar ones. Overall, the active group performed better than both other groups. Furthermore, the group learning faces from static images was the only one to perform at chance level in the novel-view condition. These findings suggest that active exploration combined with a dynamic experience of the to-be-learned faces allows for more robust face recognition, and they point to the value of such techniques for integrating facial visual information and enhancing recognition from novel viewpoints.
Affiliation(s)
- Isabelle Bülthoff
- Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Germany
- Betty J Mohler
- Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Germany
- Ian M Thornton
- Department of Cognitive Science, University of Malta, Malta
|
25
|
Minar NJ, Lewkowicz DJ. Overcoming the other-race effect in infancy with multisensory redundancy: 10-12-month-olds discriminate dynamic other-race faces producing speech. Dev Sci 2017; 21:e12604. [PMID: 28944541 DOI: 10.1111/desc.12604] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2016] [Accepted: 07/03/2017] [Indexed: 11/30/2022]
Abstract
We tested 4-6- and 10-12-month-old infants to investigate whether the often-reported decline in infant sensitivity to other-race faces may reflect responsiveness to static or dynamic/silent faces rather than a general process of perceptual narrowing. Across three experiments, we tested discrimination of dynamic own-race or other-race faces accompanied by a speech syllable, no sound, or a non-speech sound. Results indicated that 4-6- and 10-12-month-old infants discriminated own-race as well as other-race faces accompanied by a speech syllable, that only the 10-12-month-olds discriminated silent own-race faces, and that 4-6-month-olds discriminated own-race and other-race faces accompanied by a non-speech sound, whereas 10-12-month-olds discriminated only own-race faces accompanied by a non-speech sound. Overall, the results suggest that the other-race effect (ORE) reported to date reflects infant responsiveness to static or dynamic/silent faces rather than a general process of perceptual narrowing.
Affiliation(s)
- Nicholas J Minar
- Institute for the Study of Child Development, Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, USA
- David J Lewkowicz
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, USA
|
26
|
Dobs K, Ma WJ, Reddy L. Near-optimal integration of facial form and motion. Sci Rep 2017; 7:11002. [PMID: 28887554 PMCID: PMC5591281 DOI: 10.1038/s41598-017-10885-y] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2017] [Accepted: 08/08/2017] [Indexed: 11/09/2022] Open
Abstract
Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it has been fairly well established that humans integrate low-level cues optimally, weighting each cue in proportion to its relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face-processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model successfully predicted subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
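Reliability-weighted ("optimal") cue integration of the kind tested here is standardly modeled by weighting each cue in proportion to its inverse variance. The sketch below illustrates that textbook model with hypothetical form and motion cues; it is not the authors' code:

```python
def integrate_cues(estimates, variances):
    """Maximum-likelihood (inverse-variance weighted) cue combination.

    Each cue contributes in proportion to its reliability (1/variance);
    the combined estimate is more reliable than any single cue alone.
    """
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    combined = sum(w * e for w, e in zip(weights, estimates))
    combined_variance = 1.0 / total  # always below the smallest input variance
    return combined, combined_variance

# Hypothetical identity evidence from facial form (reliable) and motion (noisier)
estimate, var = integrate_cues(estimates=[0.8, 0.4], variances=[0.04, 0.16])
print(round(estimate, 2), round(var, 3))  # 0.72 0.032
```

Here the form cue, being four times more reliable, receives weight 0.8, pulling the combined estimate toward it, which is the qualitative signature an optimal model predicts for subjects' choices.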
Affiliation(s)
- Katharina Dobs
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Université Paul Sabatier, Toulouse, France; CNRS, UMR 5549, Faculté de Médecine de Purpan, Toulouse, France
- Wei Ji Ma
- New York University, Center for Neural Science and Department of Psychology, New York, New York, USA
- Leila Reddy
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Université Paul Sabatier, Toulouse, France; CNRS, UMR 5549, Faculté de Médecine de Purpan, Toulouse, France
|
27
|
Leo I, Angeli V, Lunghi M, Dalla Barba B, Simion F. Newborns' Face Recognition: The Role of Facial Movement. INFANCY 2017. [DOI: 10.1111/infa.12197] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Affiliation(s)
- Irene Leo
- Department of Developmental Psychology, University of Padova
- Marco Lunghi
- Department of Developmental Psychology, University of Padova
- Francesca Simion
- Department of Developmental Psychology, University of Padova
- Center for Cognitive Neuroscience, University of Padova
|
28
|
Densten IL, Borrowman L. Does the implicit models of leadership influence the scanning of other-race faces in adults? PLoS One 2017; 12:e0179058. [PMID: 28686605 PMCID: PMC5501397 DOI: 10.1371/journal.pone.0179058] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2016] [Accepted: 05/23/2017] [Indexed: 11/18/2022] Open
Abstract
The current study aims to identify the relationships between implicit leadership theory (ILT) prototypes/anti-prototypes and five facial features (including the nasion, upper nose, lower nose, and upper lip) of a leader from a different race than the respondents. A sample of 81 Asian respondents viewed a 30-second video of a Caucasian female who talked in a non-engaging manner about her career achievements. As participants watched the video, their eye movements were recorded via an eye-tracking device. While previous research has shown that ILT influences perceptual and attitudinal ratings of leaders, the current study extends these findings by confirming the impact of ILT on the gaze patterns of other-race participants, who appear to adopt System 1-type thinking. This study advances our understanding of how cognitive categories or schemas influence the physicality of individuals (i.e., eye gaze or movements). Finally, this study confirms that individual ILT factors have a relationship with participants' eye movements, and it suggests future research directions.
Affiliation(s)
- Iain L. Densten
- Monash University Australia Alumni, Melbourne, Australia
- Luc Borrowman
- Department of Economics, School of Business, Monash University Malaysia, Sunway, Malaysia
|
29
|
Richoz AR, Quinn PC, Hillairet de Boisferon A, Berger C, Loevenbruck H, Lewkowicz DJ, Lee K, Dole M, Caldara R, Pascalis O. Audio-Visual Perception of Gender by Infants Emerges Earlier for Adult-Directed Speech. PLoS One 2017; 12:e0169325. [PMID: 28060872 PMCID: PMC5218491 DOI: 10.1371/journal.pone.0169325] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2016] [Accepted: 12/15/2016] [Indexed: 11/18/2022] Open
Abstract
Early multisensory perceptual experiences shape the abilities of infants to perform socially-relevant visual categorization, such as the extraction of gender, age, and emotion from faces. Here, we investigated whether multisensory perception of gender is influenced by infant-directed (IDS) or adult-directed (ADS) speech. Six-, 9-, and 12-month-old infants saw side-by-side silent video-clips of talking faces (a male and a female) and heard either a soundtrack of a female or a male voice telling a story in IDS or ADS. Infants participated in only one condition, either IDS or ADS. Consistent with earlier work, infants displayed advantages in matching female relative to male faces and voices. Moreover, the new finding that emerged in the current study was that extraction of gender from face and voice was stronger at 6 months with ADS than with IDS, whereas at 9 and 12 months, matching did not differ for IDS versus ADS. The results indicate that the ability to perceive gender in audiovisual speech is influenced by speech manner. Our data suggest that infants may extract multisensory gender information developmentally earlier when looking at adults engaged in conversation with other adults (i.e., ADS) than when adults are directly talking to them (i.e., IDS). Overall, our findings imply that the circumstances of social interaction may shape early multisensory abilities to perceive gender.
Affiliation(s)
- Anne-Raphaëlle Richoz
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- LPNC, University of Grenoble Alpes, Grenoble, France
- LPNC, CNRS-UMR 5105, University of Grenoble Alpes, Grenoble, France
- Paul C. Quinn
- Department of Psychological and Brain Sciences, University of Delaware, Newark, Delaware, United States of America
- Anne Hillairet de Boisferon
- LPNC, University of Grenoble Alpes, Grenoble, France
- LPNC, CNRS-UMR 5105, University of Grenoble Alpes, Grenoble, France
- Carole Berger
- LPNC, University of Grenoble Alpes, Grenoble, France
- LPNC, CNRS-UMR 5105, University of Grenoble Alpes, Grenoble, France
- Hélène Loevenbruck
- LPNC, University of Grenoble Alpes, Grenoble, France
- LPNC, CNRS-UMR 5105, University of Grenoble Alpes, Grenoble, France
- David J. Lewkowicz
- Department of Communication Sciences & Disorders, Northeastern University, Boston, Massachusetts, United States of America
- Kang Lee
- Institute of Child Study, University of Toronto, Toronto, Ontario, Canada
- Marjorie Dole
- LPNC, University of Grenoble Alpes, Grenoble, France
- LPNC, CNRS-UMR 5105, University of Grenoble Alpes, Grenoble, France
- Roberto Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Olivier Pascalis
- LPNC, University of Grenoble Alpes, Grenoble, France
- LPNC, CNRS-UMR 5105, University of Grenoble Alpes, Grenoble, France
30
Liu X, Zhang Y, Lin J, Xia Q, Guo N, Li Q. Social Preference Deficits in Juvenile Zebrafish Induced by Early Chronic Exposure to Sodium Valproate. Front Behav Neurosci 2016; 10:201. [PMID: 27812327 PMCID: PMC5071328 DOI: 10.3389/fnbeh.2016.00201] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Received: 03/22/2016] [Accepted: 10/04/2016] [Indexed: 01/29/2023]
Abstract
Prenatal exposure to sodium valproate (VPA), a widely used anti-epileptic drug, is related to a series of dysfunctions, such as deficits in language and communication. Clinical and animal studies have indicated that the effects of VPA depend on the concentration and the exposure window, yet the neurobehavioral effects of VPA have received limited research attention. In the current study, to analyze the neurobehavioral effects of VPA, zebrafish at 24 h post-fertilization (hpf) were treated with early chronic exposure to 20 μM VPA for 7 h per day for 6 days or with early acute exposure to 100 μM VPA for 7 h. A battery of behavioral screenings was conducted at 1 month of age to investigate social preference, locomotor activity, anxiety, and behavioral response to light change. A social preference deficit was observed only in animals with chronic VPA exposure. Acute VPA exposure induced a change in locomotor activity, whereas chronic VPA exposure did not affect locomotor activity. Neither exposure procedure influenced anxiety or the behavioral response to light change. These results suggest that VPA can affect some behaviors in zebrafish, such as social behavior and locomotor activity, and that the effects are closely related to the concentration and the exposure window. Additionally, social preference appears to be independent of other simple behaviors.
Affiliation(s)
- Xiuyun Liu
- Translational Medical Center for Development and Disease, Shanghai Key Laboratory of Birth Defect, Institute of Pediatrics, Children's Hospital of Fudan University, Shanghai, China
- Yinglan Zhang
- Translational Medical Center for Development and Disease, Shanghai Key Laboratory of Birth Defect, Institute of Pediatrics, Children's Hospital of Fudan University, Shanghai, China
- Jia Lin
- Translational Medical Center for Development and Disease, Shanghai Key Laboratory of Birth Defect, Institute of Pediatrics, Children's Hospital of Fudan University, Shanghai, China
- Qiaoxi Xia
- Department of Life Sciences, Anhui Science and Technology University, Anhui, China
- Ning Guo
- Center for Chinese Medical Therapy and Systems Biology, Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Qiang Li
- Translational Medical Center for Development and Disease, Shanghai Key Laboratory of Birth Defect, Institute of Pediatrics, Children's Hospital of Fudan University, Shanghai, China
31
Dobs K, Bülthoff I, Schultz J. Identity information content depends on the type of facial movement. Sci Rep 2016; 6:34301. [PMID: 27683087 PMCID: PMC5041143 DOI: 10.1038/srep34301] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Received: 05/24/2016] [Accepted: 09/09/2016] [Indexed: 11/09/2022]
Abstract
Facial movements convey information about many social cues, including identity. However, how much information about a person's identity is conveyed by different kinds of facial movements is unknown. We addressed this question using a recent motion capture and animation system, with which we animated one avatar head with facial movements of three types: (1) emotional, (2) emotional in social interaction and (3) conversational, all recorded from several actors. In a delayed match-to-sample task, observers were best at matching actor identity across conversational movements, worse with emotional movements in social interactions, and at chance level with emotional facial expressions. Model observers performing this task showed similar performance profiles, indicating that performance variation was due to differences in information content, rather than processing. Our results suggest that conversational facial movements transmit more dynamic identity information than emotional facial expressions, thus suggesting different functional roles and processing mechanisms for different types of facial motion.
Affiliation(s)
- Katharina Dobs
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Université de Toulouse, Centre de Recherche Cerveau et Cognition, Université Paul Sabatier, Toulouse, France; CNRS, Faculté de Médecine de Purpan, UMR 5549, Toulouse, France
- Isabelle Bülthoff
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Johannes Schultz
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Division of Medical Psychology and Department of Psychiatry, University of Bonn, Bonn, Germany
32
Liu CH, Chen W, Ward J, Takahashi N. Dynamic Emotional Faces Generalise Better to a New Expression but not to a New View. Sci Rep 2016; 6:31001. [PMID: 27499252 PMCID: PMC4976339 DOI: 10.1038/srep31001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/21/2016] [Accepted: 07/11/2016] [Indexed: 11/18/2022]
Abstract
Prior research based on static images has found limited improvement for recognising previously learnt faces in a new expression after several different facial expressions of these faces had been shown during the learning session. We investigated whether non-rigid motion of facial expression facilitates the learning process. In Experiment 1, participants remembered faces that were either presented in short video clips or still images. To assess the effect of exposure to expression variation, each face was either learnt through a single expression or three different expressions. Experiment 2 examined whether learning faces in video clips could generalise more effectively to a new view. The results show that faces learnt from video clips generalised effectively to a new expression with exposure to a single expression, whereas faces learnt from stills showed poorer generalisation with exposure to either single or three expressions. However, although superior recognition performance was demonstrated for faces learnt through video clips, dynamic facial expression did not create better transfer of learning to faces tested in a new view. The data thus fail to support the hypothesis that non-rigid motion enhances viewpoint invariance. These findings reveal both benefits and limitations of exposures to moving expressions for expression-invariant face recognition.
Affiliation(s)
- Chang Hong Liu
- Department of Psychology, Faculty of Science and Technology, Bournemouth University, Talbot Campus, Fern Barrow, Poole, Dorset, BH12 5BB, United Kingdom
- Wenfeng Chen
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Chaoyang District, Beijing 100101, China
- James Ward
- Department of Computer Science, University of Hull, Cottingham Road, Hull, HU6 7RX, United Kingdom
- Nozomi Takahashi
- Department of Psychology, Graduate School of Literature and Social Science, Nihon University, 3-25-40 Sakurajosui, Setagaya-ku, Tokyo 156-8550, Japan
33
Simhi N, Yovel G. The contribution of the body and motion to whole person recognition. Vision Res 2016; 122:12-20. [DOI: 10.1016/j.visres.2016.02.003] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Received: 09/15/2015] [Revised: 02/05/2016] [Accepted: 02/17/2016] [Indexed: 11/28/2022]
34
Yovel G, O’Toole AJ. Recognizing People in Motion. Trends Cogn Sci 2016; 20:383-395. [DOI: 10.1016/j.tics.2016.02.005] [Citation(s) in RCA: 83] [Impact Index Per Article: 10.4] [Received: 01/16/2016] [Revised: 02/18/2016] [Accepted: 02/18/2016] [Indexed: 11/15/2022]
35
Butcher N, Lander K. Exploring the motion advantage: evaluating the contribution of familiarity and differences in facial motion. Q J Exp Psychol (Hove) 2016; 70:919-929. [PMID: 26822035 DOI: 10.1080/17470218.2016.1138974] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Indexed: 10/22/2022]
Abstract
Seeing a face move can improve familiar face recognition, face matching, and learning. More specifically, familiarity with a face may facilitate the learning of an individual's "dynamic facial signature". In the outlined research we examine the relationship between participant ratings of familiarity, the distinctiveness of motion, the amount of facial motion, and the recognition of familiar moving faces (Experiment 1) as well as the magnitude of the motion advantage (Experiment 2). Significant positive correlations were found between all factors. Findings suggest that faces rated as moving a lot and in a distinctive manner benefited the most from being seen in motion. Additionally, findings indicate that facial motion information becomes a more important cue to recognition the more familiar a face is, suggesting that "dynamic facial signatures" continue to be learnt over time and integrated within the face representation. Results are discussed in relation to theoretical explanations of the moving-face advantage.
Affiliation(s)
- Natalie Butcher
- Social Futures Institute, Teesside University, Middlesbrough, UK
- Karen Lander
- School of Psychological Sciences, University of Manchester, Manchester, UK
36
de Boisferon AH, Dupierrix E, Quinn PC, Lœvenbruck H, Lewkowicz DJ, Lee K, Pascalis O. Perception of Multisensory Gender Coherence in 6- and 9-month-old Infants. INFANCY 2015; 20:661-674. [PMID: 26561475 PMCID: PMC4637175 DOI: 10.1111/infa.12088] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.1] [Received: 07/29/2014] [Accepted: 04/27/2015] [Indexed: 11/29/2022]
Abstract
One of the most salient social categories conveyed by human faces and voices is gender. We investigated the developmental emergence of the ability to perceive the coherence of auditory and visual attributes of gender in 6- and 9-month-old infants. Infants viewed two side-by-side video clips of a man and a woman singing a nursery rhyme and heard a synchronous male or female soundtrack. Results showed that 6-month-old infants did not match the audible and visible attributes of gender, and 9-month-old infants matched only female faces and voices. These findings indicate that the ability to perceive the multisensory coherence of gender emerges relatively late in infancy and that it reflects the greater experience that most infants have with female faces and voices.
Affiliation(s)
- Eve Dupierrix
- Laboratoire de Psychologie et NeuroCognition, Université Grenoble Alpes, CNRS-UMR 5105, Grenoble, France
- Paul C. Quinn
- Department of Psychological and Brain Sciences, University of Delaware, Newark, Delaware, USA
- Hélène Lœvenbruck
- Laboratoire de Psychologie et NeuroCognition, Université Grenoble Alpes, CNRS-UMR 5105, Grenoble, France
- David J. Lewkowicz
- Department of Communication Sciences & Disorders, Northeastern University, Boston, Massachusetts, USA
- Kang Lee
- Institute of Child Study, University of Toronto, Toronto, Canada
- Olivier Pascalis
- Laboratoire de Psychologie et NeuroCognition, Université Grenoble Alpes, CNRS-UMR 5105, Grenoble, France
37
Conrad FG, Schober MF, Jans M, Orlowski RA, Nielsen D, Levenstein R. Comprehension and engagement in survey interviews with virtual agents. Front Psychol 2015; 6:1578. [PMID: 26539138 PMCID: PMC4611966 DOI: 10.3389/fpsyg.2015.01578] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Received: 03/08/2015] [Accepted: 09/29/2015] [Indexed: 12/05/2022]
Abstract
This study investigates how an onscreen virtual agent's dialog capability and facial animation affect survey respondents' comprehension and engagement in “face-to-face” interviews, using questions from US government surveys whose results have far-reaching impact on national policies. In the study, 73 laboratory participants were randomly assigned to respond in one of four interviewing conditions, in which the virtual agent had either high or low dialog capability (implemented through Wizard of Oz) and high or low facial animation, based on motion capture from a human interviewer. Respondents, whose faces were visible to the Wizard (and videorecorded) during the interviews, answered 12 questions about housing, employment, and purchases on the basis of fictional scenarios designed to allow measurement of comprehension accuracy, defined as the fit between responses and US government definitions. Respondents answered more accurately with the high-dialog-capability agents, requesting clarification more often particularly for ambiguous scenarios; and they generally treated the high-dialog-capability interviewers more socially, looking at the interviewer more and judging high-dialog-capability agents as more personal and less distant. Greater interviewer facial animation did not affect response accuracy, but it led to more displays of engagement—acknowledgments (verbal and visual) and smiles—and to the virtual interviewer's being rated as less natural. The pattern of results suggests that a virtual agent's dialog capability and facial animation differently affect survey respondents' experience of interviews, behavioral displays, and comprehension, and thus the accuracy of their responses. The pattern of results also suggests design considerations for building survey interviewing agents, which may differ depending on the kinds of survey questions (sensitive or not) that are asked.
Affiliation(s)
- Frederick G Conrad
- Michigan Program in Survey Methodology, Institute for Social Research, University of Michigan, Ann Arbor, MI, USA; Joint Program in Survey Methodology, University of Maryland, College Park, MD, USA
- Michael F Schober
- Department of Psychology, New School for Social Research, New York, NY, USA
- Matt Jans
- Center for Health Policy Research, University of California at Los Angeles, Los Angeles, CA, USA
- Rachel A Orlowski
- Department of Epidemiology, School of Public Health, University of Michigan, Ann Arbor, MI, USA
- Daniel Nielsen
- Department of Biostatistics, Center for Cancer Biostatistics, University of Michigan Medical School, Ann Arbor, MI, USA
- Rachel Levenstein
- University of Chicago Consortium on Chicago School Research, Urban Education Institute, University of Chicago, Chicago, IL, USA
38
Dreosti E, Lopes G, Kampff AR, Wilson SW. Development of social behavior in young zebrafish. Front Neural Circuits 2015; 9:39. [PMID: 26347614 PMCID: PMC4539524 DOI: 10.3389/fncir.2015.00039] [Citation(s) in RCA: 164] [Impact Index Per Article: 18.2] [Received: 06/12/2015] [Accepted: 07/23/2015] [Indexed: 12/17/2022]
Abstract
Adult zebrafish are robustly social animals whereas larvae are not. We designed an assay to determine at what stage of development zebrafish begin to interact with and prefer other fish. One-week-old zebrafish do not show significant social preference, whereas most 3-week-old zebrafish strongly prefer to remain in a compartment where they can view conspecifics. However, for some individuals, the presence of conspecifics drives avoidance instead of attraction. Social preference is dependent on vision and requires viewing fish of a similar age/size. In addition, over the same 1–3 week period, larval zebrafish increasingly tend to coordinate their movements, a simple form of social interaction. Finally, social preference and coupled interactions are differentially modified by an NMDAR antagonist and acute exposure to ethanol, both of which are known to alter social behavior in adult zebrafish.
Affiliation(s)
- Elena Dreosti
- Department of Cell and Developmental Biology, University College London, London, UK
- Gonçalo Lopes
- Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, Lisbon, Portugal
- Adam R Kampff
- Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, Lisbon, Portugal
- Stephen W Wilson
- Department of Cell and Developmental Biology, University College London, London, UK
39
Attwood AS, Catling JC, Kwong ASF, Munafò MR. Effects of 7.5% carbon dioxide (CO2) inhalation and ethnicity on face memory. Physiol Behav 2015; 147:97-101. [PMID: 25890273 PMCID: PMC4465959 DOI: 10.1016/j.physbeh.2015.04.027] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Received: 12/29/2014] [Revised: 04/10/2015] [Accepted: 04/13/2015] [Indexed: 12/03/2022]
Abstract
The ability to accurately verify facial identity has important forensic implications, but this ability is fallible. Research suggests that anxiety at the time of encoding can impair subsequent recall, but no studies have investigated the effects of anxiety at the time of recall in an experimental paradigm. This study addresses this gap using the carbon dioxide (CO2) model of anxiety induction. Thirty participants completed two inhalations: one of 7.5% CO2-enriched air and one of medical air (i.e., placebo). Prior to each inhalation, participants were presented with 16 facial images (50% own-ethnicity, 50% other-ethnicity). During the inhalation they were required to identify which faces had been seen before from a set of 32 images (16 seen-before and 16 novel images). Identification accuracy was lower during CO2 inhalation compared to air (F[1,29] = 5.5, p = .026, ηp2 = .16), and false alarm rate was higher for other-ethnicity faces compared to own-ethnicity faces (F[1,29] = 11.3, p = .002, ηp2 = .28). There was no evidence of gas by ethnicity interactions for accuracy or false alarms (ps > .34). Ratings of decision confidence did not differ by gas condition, suggesting that participants were unaware of differences in performance. These findings suggest that anxiety, at the point of recognition, impairs facial identification accuracy. This has substantial implications for eyewitness memory situations, and suggests that efforts should be made to attenuate the anxiety in these situations in order to improve the validity of identification.
Highlights:
- Use of carbon dioxide challenge to investigate acute anxiety effects on face memory.
- Investigation of the "own-ethnicity" effect and its interaction with acute anxiety.
- Results show decreased accuracy for face memory during acutely anxious states.
- Results show increased false identifications when viewing other-ethnicity faces.
- Efforts should be made to attenuate anxiety in eyewitness situations.
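As a quick arithmetic check, the effect sizes reported in this abstract can be recovered from the F statistics and their degrees of freedom via the standard relation ηp² = (F × df_effect) / (F × df_effect + df_error). The sketch below is illustrative only (the helper name is ours, not the study's); it confirms the reported values of .16 and .28.

```python
def partial_eta_squared(f_stat, df_effect, df_error):
    """Recover partial eta squared from an F statistic and its degrees of freedom."""
    return (f_stat * df_effect) / (f_stat * df_effect + df_error)

# Accuracy effect of CO2 inhalation: F(1, 29) = 5.5
print(round(partial_eta_squared(5.5, 1, 29), 2))   # 0.16, matching the reported value
# False-alarm effect of face ethnicity: F(1, 29) = 11.3
print(round(partial_eta_squared(11.3, 1, 29), 2))  # 0.28, matching the reported value
```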
Affiliation(s)
- Angela S Attwood
- MRC Integrative Epidemiology Unit (IEU) at the University of Bristol, United Kingdom; UK Centre for Tobacco and Alcohol Studies, University of Bristol, United Kingdom; School of Experimental Psychology, University of Bristol, United Kingdom
- Jon C Catling
- School of Psychology, University of Birmingham, United Kingdom
- Alex S F Kwong
- MRC Integrative Epidemiology Unit (IEU) at the University of Bristol, United Kingdom; UK Centre for Tobacco and Alcohol Studies, University of Bristol, United Kingdom; School of Experimental Psychology, University of Bristol, United Kingdom
- Marcus R Munafò
- MRC Integrative Epidemiology Unit (IEU) at the University of Bristol, United Kingdom; UK Centre for Tobacco and Alcohol Studies, University of Bristol, United Kingdom; School of Experimental Psychology, University of Bristol, United Kingdom
40
Lander K, Butcher N. Independence of face identity and expression processing: exploring the role of motion. Front Psychol 2015; 6:255. [PMID: 25821441 PMCID: PMC4358059 DOI: 10.3389/fpsyg.2015.00255] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Received: 10/29/2014] [Accepted: 02/20/2015] [Indexed: 11/13/2022]
Abstract
According to the classic Bruce and Young (1986) model of face recognition, identity and emotional expression information from the face are processed in parallel and independently. Since this functional model was published, a growing body of research has challenged this viewpoint and instead supports an interdependence view. In addition, neural models of face processing emphasize differences in the processing of changeable and invariant aspects of faces. This article provides a critical appraisal of this literature and discusses the role of motion in both expression and identity recognition, and the intertwined nature of identity, expression, and motion processing. We conclude by discussing recent advancements in this area and research questions that still need to be addressed.
Affiliation(s)
- Karen Lander
- School of Psychological Sciences, University of Manchester, Manchester, UK
- Natalie Butcher
- School of Social Sciences, Business and Law, Teesside University, Middlesbrough, UK
41
Maguinness C, Newell FN. Non-rigid, but not rigid, motion interferes with the processing of structural face information in developmental prosopagnosia. Neuropsychologia 2015; 70:281-95. [PMID: 25737056 DOI: 10.1016/j.neuropsychologia.2015.02.038] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Received: 11/19/2014] [Revised: 02/11/2015] [Accepted: 02/27/2015] [Indexed: 11/30/2022]
Abstract
There is growing evidence to suggest that facial motion is an important cue for face recognition. However, it is poorly understood whether motion is integrated with facial form information or whether it provides an independent cue to identity. To provide further insight into this issue, we compared the effect of motion on face perception in two developmental prosopagnosics and age-matched controls. Participants first learned faces presented dynamically (video), or in a sequence of static images, in which rigid (viewpoint) or non-rigid (expression) changes occurred. Immediately following learning, participants were required to match a static face image to the learned face. Test face images varied by viewpoint (Experiment 1) or expression (Experiment 2) and were learned or novel face images. We found similar performance across prosopagnosics and controls in matching facial identity across changes in viewpoint when the learned face was shown moving in a rigid manner. However, non-rigid motion interfered with face matching across changes in expression for both individuals with prosopagnosia relative to control participants. In contrast, non-rigid motion did not differentially affect the matching of facial expressions across changes in identity for either prosopagnosic (Experiment 3). Our results suggest that whilst the processing of rigid motion information of a face may be preserved in developmental prosopagnosia, non-rigid motion can specifically interfere with the representation of structural face information. Taken together, these results suggest that both form and motion cues are important in face perception and that these cues are likely integrated in the representation of facial identity.
Affiliation(s)
- Corrina Maguinness
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- Fiona N Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
42
Abstract
Advances in marker-less motion capture technology now allow the accurate replication of facial motion and deformation in computer-generated imagery (CGI). A forced-choice discrimination paradigm using such CGI facial animations showed that human observers can categorize identity solely from facial motion cues. Animations were generated from motion captures acquired during natural speech, thus eliciting both rigid (head rotations and translations) and nonrigid (expressional changes) motion. To limit interferences from individual differences in facial form, all animations shared the same appearance. Observers were required to discriminate between different videos of facial motion and between the facial motions of different people. Performance was compared to the control condition of orientation-inverted facial motion. The results show that observers are able to make accurate discriminations of identity in the absence of all cues except facial motion. A clear inversion effect in both tasks provided consistency with previous studies, supporting the configural view of human face perception. The accuracy of this motion capture technology thus allowed stimuli to be generated that closely resembled real moving faces. Future studies may wish to implement such methodology when studying human face perception.