1. Maguinness C, Schall S, Mathias B, Schoemann M, von Kriegstein K. Prior multisensory learning can facilitate auditory-only voice-identity and speech recognition in noise. Q J Exp Psychol (Hove) 2024:17470218241278649. [PMID: 39164830] [DOI: 10.1177/17470218241278649]
Abstract
Seeing the visual articulatory movements of a speaker, while hearing their voice, helps with understanding what is said. This multisensory enhancement is particularly evident in noisy listening conditions. Multisensory enhancement also occurs in auditory-only conditions: auditory-only speech and voice-identity recognition are superior for speakers previously learned with their face, compared to control learning, an effect termed the "face-benefit." Whether the face-benefit can assist in maintaining robust perception in increasingly noisy listening conditions, similar to concurrent multisensory input, is unknown. Here, in two behavioural experiments, we examined this hypothesis. In each experiment, participants learned a series of speakers' voices together with their dynamic face or control image. Following learning, participants listened to auditory-only sentences spoken by the same speakers and recognised the content of the sentences (speech recognition, Experiment 1) or the voice-identity of the speaker (Experiment 2) in increasing levels of auditory noise. For speech recognition, 14 of 30 participants (47%) showed a face-benefit; for voice-identity recognition, 19 of 25 participants (76%) did. For those participants who demonstrated a face-benefit, the face-benefit increased with auditory noise levels. Taken together, the results support an audio-visual model of auditory communication and suggest that the brain can develop a flexible system in which learned facial characteristics are used to deal with varying auditory uncertainty.
Affiliation(s)
- Corrina Maguinness: Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Sonja Schall: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Brian Mathias: Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; School of Psychology, University of Aberdeen, Aberdeen, United Kingdom
- Martin Schoemann: Chair of Psychological Methods and Cognitive Modelling, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
- Katharina von Kriegstein: Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
2. Butcher N, Bennetts RJ, Sexton L, Barbanta A, Lander K. Eye movement differences when recognising and learning moving and static faces. Q J Exp Psychol (Hove) 2024:17470218241252145. [PMID: 38644390] [DOI: 10.1177/17470218241252145]
Abstract
Seeing a face in motion can help subsequent face recognition. Several explanations have been proposed for this "motion advantage," but other factors that might play a role have received less attention. For example, facial movement might enhance recognition by attracting attention to the internal facial features, thereby facilitating identification. However, there is no direct evidence that motion increases attention to regions of the face that facilitate identification (i.e., internal features) compared with static faces. We tested this hypothesis by recording participants' eye movements while they completed famous face recognition (Experiment 1, N = 32) and face-learning (Experiment 2, N = 60; Experiment 3, N = 68) tasks, with presentation style manipulated (moving or static). Across all three experiments, a motion advantage was found, and participants directed a higher proportion of fixations to the internal features (i.e., eyes, nose, and mouth) of moving versus static faces. Conversely, the proportion of fixations to the internal non-feature area (i.e., cheeks, forehead, chin) and external area (Experiment 3) was significantly reduced for moving compared with static faces (all ps < .05). Results suggest that during both familiar and unfamiliar face recognition, facial motion is associated with increased attention to internal facial features, but only during familiar face recognition is the magnitude of the motion advantage significantly related to the proportion of fixations directed to the internal features.
Affiliation(s)
- Natalie Butcher: Department of Psychology, Teesside University, Middlesbrough, UK
- Laura Sexton: Department of Psychology, Teesside University, Middlesbrough, UK; School of Psychology, Faculty of Health Sciences and Wellbeing, University of Sunderland, Sunderland, UK
- Karen Lander: Division of Psychology, Communication and Human Neuroscience, University of Manchester, Manchester, UK
3. Yeung SC, Sidhu J, Youn S, Schaefer HRH, Barton JJS, Corrow SL. The role of the upper and lower face in the recognition of facial identity in dynamic stimuli. Vision Res 2023; 206:108194. [PMID: 36801665] [PMCID: PMC10085847] [DOI: 10.1016/j.visres.2023.108194]
Abstract
Studies with static faces find that upper face halves are more easily recognized than lower face halves, an upper-face advantage. However, faces are usually encountered as dynamic stimuli, and there is evidence that dynamic information influences face identity recognition. This raises the question of whether dynamic faces also show an upper-face advantage. The objective of this study was to examine whether familiarity for recently learned faces was more accurate for upper or lower face halves, and whether this depended upon whether the face was presented as static or dynamic. In Experiment 1, subjects learned a total of 12 faces: 6 static images and 6 dynamic video-clips of actors in silent conversation. In Experiment 2, subjects learned 12 faces, all dynamic video-clips. During the testing phase of Experiments 1 (between subjects) and 2 (within subjects), subjects were asked to recognize upper and lower face halves from static images and/or dynamic clips. The data did not provide evidence for a difference in the upper-face advantage between static and dynamic faces. However, in both experiments, we found an upper-face advantage, consistent with prior literature, for female faces, but not for male faces. In conclusion, the use of dynamic stimuli may have little effect on the presence of an upper-face advantage, especially when the static comparison contains a series of static images, rather than a single static image, and is of sufficient image quality. Future studies could investigate the influence of face gender on the presence of an upper-face advantage.
Affiliation(s)
- Shanna C Yeung: Psychology Department, Bethel University, 3900 Bethel Drive, St Paul, MN 55112, USA
- Jhunam Sidhu: Psychology Department, Bethel University, 3900 Bethel Drive, St Paul, MN 55112, USA
- Sena Youn: Psychology Department, Bethel University, 3900 Bethel Drive, St Paul, MN 55112, USA
- Heidi R H Schaefer: Psychology Department, Bethel University, 3900 Bethel Drive, St Paul, MN 55112, USA
- Jason J S Barton: Psychology Department, Bethel University, 3900 Bethel Drive, St Paul, MN 55112, USA
- Sherryse L Corrow: Psychology Department, Bethel University, 3900 Bethel Drive, St Paul, MN 55112, USA
4. García AS, Fernández-Sotos P, González P, Navarro E, Rodriguez-Jimenez R, Fernández-Caballero A. Behavioral intention of mental health practitioners toward the adoption of virtual humans in affect recognition training. Front Psychol 2022; 13:934880. [PMCID: PMC9600723] [DOI: 10.3389/fpsyg.2022.934880]
Abstract
This paper explores the key factors influencing mental health professionals' behavioral intention to adopt virtual humans as a means of affect recognition training. Therapies targeting social cognition deficits are in high demand given that these deficits are related to a loss of functioning and quality of life in several neuropsychiatric conditions such as schizophrenia, autism spectrum disorders, affective disorders, and acquired brain injury. Therefore, developing new therapies would greatly improve the quality of life of this large cohort of patients. A questionnaire based on the second revision of the Unified Theory of Acceptance and Use of Technology (UTAUT2) questionnaire was used for this study. One hundred and twenty-four mental health professionals responded to the questionnaire after viewing a video presentation of the system. The results confirmed that mental health professionals showed a positive intention to use virtual reality tools to train affect recognition, as they allow manipulation of social interaction with patients. Further studies should be conducted with therapists from other countries to reach more conclusions.
Affiliation(s)
- Arturo S. García: Unidad Multidisciplinar de Investigación de la Neurocognición y Emoción en Entornos Virtuales y Reales, Instituto de Investigación en Informática de Albacete, Albacete, Spain; Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, Albacete, Spain
- Patricia Fernández-Sotos: Servicio de Salud Mental, Complejo Hospitalario Universitario de Albacete, Albacete, Spain; Biomedical Research Networking Center in Mental Health (CIBERSAM), Madrid, Spain
- Pascual González: Unidad Multidisciplinar de Investigación de la Neurocognición y Emoción en Entornos Virtuales y Reales, Instituto de Investigación en Informática de Albacete, Albacete, Spain; Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, Albacete, Spain; Biomedical Research Networking Center in Mental Health (CIBERSAM), Madrid, Spain
- Elena Navarro: Unidad Multidisciplinar de Investigación de la Neurocognición y Emoción en Entornos Virtuales y Reales, Instituto de Investigación en Informática de Albacete, Albacete, Spain; Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, Albacete, Spain; Biomedical Research Networking Center in Mental Health (CIBERSAM), Madrid, Spain
- Roberto Rodriguez-Jimenez: Biomedical Research Networking Center in Mental Health (CIBERSAM), Madrid, Spain; Cognición y Psicosis, Area de Neurociencias y Salud Mental, Instituto de Investigación Sanitaria Hospital 12 de Octubre (imas12), Madrid, Spain; CogPsy-Group, Universidad Complutense de Madrid, Madrid, Spain
- Antonio Fernández-Caballero (correspondence): Unidad Multidisciplinar de Investigación de la Neurocognición y Emoción en Entornos Virtuales y Reales, Instituto de Investigación en Informática de Albacete, Albacete, Spain; Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, Albacete, Spain; Biomedical Research Networking Center in Mental Health (CIBERSAM), Madrid, Spain
5. Rossion B. Twenty years of investigation with the case of prosopagnosia PS to understand human face identity recognition. Part I: Function. Neuropsychologia 2022; 173:108278. [DOI: 10.1016/j.neuropsychologia.2022.108278]
6. Maguinness C, von Kriegstein K. Visual mechanisms for voice-identity recognition flexibly adjust to auditory noise level. Hum Brain Mapp 2021; 42:3963-3982. [PMID: 34043249] [PMCID: PMC8288083] [DOI: 10.1002/hbm.25532]
Abstract
Recognising the identity of voices is a key ingredient of communication. Visual mechanisms support this ability: recognition is better for voices previously learned with their corresponding face (compared to a control condition). This so‐called ‘face‐benefit’ is supported by the fusiform face area (FFA), a region sensitive to facial form and identity. Behavioural findings indicate that the face‐benefit increases in noisy listening conditions. The neural mechanisms for this increase are unknown. Here, using functional magnetic resonance imaging, we examined responses in face‐sensitive regions while participants recognised the identity of auditory‐only speakers (previously learned by face) in high (SNR −4 dB) and low (SNR +4 dB) levels of auditory noise. We observed a face‐benefit in both noise levels, for most participants (16 of 21). In high‐noise, the recognition of face‐learned speakers engaged the right posterior superior temporal sulcus motion‐sensitive face area (pSTS‐mFA), a region implicated in the processing of dynamic facial cues. The face‐benefit in high‐noise also correlated positively with increased functional connectivity between this region and voice‐sensitive regions in the temporal lobe in the group of 16 participants with a behavioural face‐benefit. In low‐noise, the face‐benefit was robustly associated with increased responses in the FFA and to a lesser extent the right pSTS‐mFA. The findings highlight the remarkably adaptive nature of the visual network supporting voice‐identity recognition in auditory‐only listening conditions.
Affiliation(s)
- Corrina Maguinness: Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Katharina von Kriegstein: Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
7. Bylemans T, Vrancken L, Verfaillie K. Developmental Prosopagnosia and Elastic Versus Static Face Recognition in an Incidental Learning Task. Front Psychol 2020; 11:2098. [PMID: 32982859] [PMCID: PMC7488957] [DOI: 10.3389/fpsyg.2020.02098]
Abstract
Previous research on the beneficial effect of motion has postulated that learning a face in motion provides additional cues to recognition. Surprisingly, however, few studies have examined the beneficial effect of motion in an incidental learning task or in developmental prosopagnosia (DP), even though such studies could provide more valuable information about everyday face recognition compared to the perception of static faces. In the current study, 18 young adults (Experiment 1) and five DPs and 10 age-matched controls (Experiment 2) participated in an incidental learning task during which both static and elastically moving unfamiliar faces were sequentially presented and were to be recognized in a delayed visual search task during which the faces could either keep their original presentation or switch (from static to elastically moving or vice versa). In Experiment 1, performance in the elastic-elastic condition showed a significant improvement relative to the elastic-static and static-elastic conditions; however, no significant difference could be detected relative to the static-static condition. Except for higher scores in the elastic-elastic compared to the static-elastic condition in the age-matched group, no other significant differences were detected between conditions for either the DPs or the age-matched controls. The current study could not provide compelling evidence for a general beneficial effect of motion. Age-matched controls performed generally worse than DPs, which may potentially be explained by their higher rates of false alarms. Factors that could have influenced the results are discussed.
Affiliation(s)
- Tom Bylemans: Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Leia Vrancken: Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Karl Verfaillie: Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
8. de Almondes KM, Júnior FWNH, Leonardo MEM, Alves NT. Facial Emotion Recognition and Executive Functions in Insomnia Disorder: An Exploratory Study. Front Psychol 2020; 11:502. [PMID: 32362851] [PMCID: PMC7182077] [DOI: 10.3389/fpsyg.2020.00502]
Abstract
BACKGROUND: Clinical and experimental findings suggest that insomnia is associated with changes in emotional processing and impairments in cognitive functioning. In the present study, we investigate the relationship between facial emotion recognition and executive functioning among individuals with insomnia as well as healthy controls.
METHOD: A total of 11 individuals (mean age 31.3 ± 9.4) diagnosed with insomnia disorder and 15 control participants (mean age 24.8 ± 4.6) took part in the study. Participants responded to a facial emotion recognition task which presented them with static and dynamic stimuli, and were evaluated with regard to cognition, sleep, and mood.
RESULTS: Compared to controls, we found that participants with insomnia performed worse in the recognition of the facial emotion of fear (p = 0.001; partial η² = 0.549; β = 0.999) and had lower scores in tests of verbal comprehension and perceptual organization (104.00 vs. 115.00, U = 135.5; p = 0.004; Cohen's d = 1.281). We also found a relationship between facial emotion recognition and performance in cognitive tests, such as those related to perceptual organization, cognitive flexibility, and working memory.
CONCLUSION: Results suggest that participants with insomnia may present some impairment in executive functions as well as in the recognition of facial emotions with negative valences (fear and sadness).
Affiliation(s)
- Katie Moraes de Almondes: Department of Psychology and Postgraduate Program in Psychobiology, Federal University of Rio Grande do Norte, Natal, Brazil
- Maria Emanuela Matos Leonardo: Department of Psychology and Postgraduate Program in Psychobiology, Federal University of Rio Grande do Norte, Natal, Brazil
- Nelson Torro Alves: Department of Psychology, Federal University of Paraíba, João Pessoa, Brazil
9. The Frozen Effect: Objects in motion are more aesthetically appealing than objects frozen in time. PLoS One 2019; 14:e0215813. [PMID: 31095600] [PMCID: PMC6522023] [DOI: 10.1371/journal.pone.0215813]
Abstract
Videos of moving faces are more flattering than static images of the same face, a phenomenon dubbed the Frozen Face Effect. This may reflect an aesthetic preference for faces viewed in a more ecological context than still photographs. In the current set of experiments, we sought to determine whether this effect is unique to facial processing, or if motion confers an aesthetic benefit to other stimulus categories as well, such as bodies and objects—that is, a more generalized ‘Frozen Effect’ (FE). If motion were the critical factor in the FE, we would expect the video of a body or object in motion to be significantly more appealing than when seen in individual, static frames. To examine this, we asked participants to rate sets of videos of bodies and objects in motion along with the still frames constituting each video. Extending the original FFE, we found that participants rated videos as significantly more flattering than each video’s corresponding still images, regardless of stimulus domain, suggesting that the FFE generalizes well beyond face perception. Interestingly, the magnitude of the FE increased with the predictability of stimulus movement. Our results suggest that observers prefer bodies and objects in motion over the same information presented in static form, and the more predictable the motion, the stronger the preference. Motion imbues objects and bodies with greater aesthetic appeal, which has implications for how one might choose to portray oneself in various social media platforms.
10. Being observed caused physiological stress leading to poorer face recognition. Acta Psychol (Amst) 2019; 196:118-128. [PMID: 31054376] [DOI: 10.1016/j.actpsy.2019.04.012]
Abstract
Being observed when completing physical and mental tasks alters how successful people are at completing them. This has been explained in terms of evaluation apprehension, drive theory, and the effects of stress caused by being observed. In three experiments, we explore how being observed affects participants' ability to recognise faces as it relates to the aforementioned theories: easier face recognition tasks should be completed with more success under observation relative to harder tasks. In Experiment 1, we found that being observed during the learning phase of an old/new recognition paradigm caused participants to be less accurate during the test phase than not being observed. Being observed at test did not affect accuracy. We replicated these findings in a line-up-type task in Experiment 2. Finally, in Experiment 3, we assessed whether these effects were due to the difficulty of the task or due to the physiological stress caused by being observed. We found that while observation caused physiological stress, it did not relate to accuracy. Moderately difficult tasks (upright unfamiliar face recognition and inverted familiar face recognition) were detrimentally affected by being observed, whereas easy (upright familiar face recognition) and difficult tasks (inverted unfamiliar face recognition) were unaffected by this manipulation. We explain these results in terms of the direct effects being observed has on task performance for moderately difficult tasks and discuss the implications of these results for cognitive psychological experimentation.
11. Petrovski S, Rhodes G, Jeffery L. Adaptation to dynamic faces produces face identity aftereffects. J Vis 2018; 18:13. [PMID: 30572341] [DOI: 10.1167/18.13.13]
Abstract
Face aftereffects are well established for static stimuli and have been used extensively as a tool for understanding the neural mechanisms underlying face recognition. It has also been argued that adaptive coding, as demonstrated by face aftereffects, plays a functional role in face recognition by calibrating our face norms to reflect current experience. If aftereffects tap high-level perceptual mechanisms that are critically involved in everyday face recognition then they should also occur for moving faces. Here we asked whether face identity aftereffects can be induced using dynamic adaptors. The face identity aftereffect occurs when adaptation to a particular identity (e.g., Dan) biases subsequent perception toward the opposite identity (e.g., antiDan). We adapted participants to video of real faces that displayed either rigid, non-rigid, or no motion and tested for aftereffects in static antifaces. Adapt and test stimuli differed in size, to minimize low-level adaptation. Aftereffects were found in all conditions, suggesting that face identity aftereffects tap high-level mechanisms important for face recognition. Aftereffects were not significantly reduced in the motion conditions relative to the static condition. Overall, our results support the view that face aftereffects reflect adaptation of high-level mechanisms important for real-world face recognition in which faces are moving.
Affiliation(s)
- Samantha Petrovski: ARC Centre of Excellence in Cognition and its Disorders, School of Psychological Science, The University of Western Australia, Crawley, Western Australia, Australia
- Gillian Rhodes: ARC Centre of Excellence in Cognition and its Disorders, School of Psychological Science, The University of Western Australia, Crawley, Western Australia, Australia
- Linda Jeffery: ARC Centre of Excellence in Cognition and its Disorders, School of Psychological Science, The University of Western Australia, Crawley, Western Australia, Australia
12. Facial Dynamics Interpreter Network: What Are the Important Relations Between Local Dynamics for Facial Trait Estimation? In: Computer Vision – ECCV 2018; 2018. [DOI: 10.1007/978-3-030-01258-8_29]
13. Emotional recognition of dynamic facial expressions before and after cochlear implantation in adults with progressive deafness. Hear Res 2017; 354:64-72. [DOI: 10.1016/j.heares.2017.08.007]
14. Leo I, Angeli V, Lunghi M, Dalla Barba B, Simion F. Newborns' Face Recognition: The Role of Facial Movement. Infancy 2017. [DOI: 10.1111/infa.12197]
Affiliation(s)
- Irene Leo: Department of Developmental Psychology, University of Padova
- Marco Lunghi: Department of Developmental Psychology, University of Padova
- Francesca Simion: Department of Developmental Psychology, University of Padova; Center for Cognitive Neuroscience, University of Padova
15. Facial Mobility after Maxilla-Mandibular Advancement in Patients with Severe Obstructive Sleep Apnea Syndrome: A Three-Dimensional Study. Int J Dent 2017; 2017:1574304. [PMID: 28659977] [PMCID: PMC5474255] [DOI: 10.1155/2017/1574304]
Abstract
Introduction. The functional results of surgery in terms of facial mobility are key elements in the treatment of patients. Little is actually known about changes in facial mobility following surgical treatment with maxillomandibular advancement (MMA).
Objectives. The topic of the present research was a three-dimensional (3D) study of basic facial movements in typical OSAS patients treated with MMA.
Materials and Methods. Ten patients affected by severe obstructive sleep apnea syndrome (OSAS) were enrolled in the study. Their facial surface data were acquired using a 3D laser scanner one week before (T1) and 12 months after (T2) orthognathic surgery. The facial movements were frowning, grimace, smiling, and lip purse. They were described in terms of surface and landmark displacements (mm). The mean landmark displacement was calculated for the right and left sides of the face, at T1 and at T2.
Results. One year after surgery, facial movements were similar to presurgical registrations. No modifications of symmetry were present.
Conclusions. Despite the skeletal maxilla-mandible expansion, orthognathic surgical treatment (MMA) of OSAS patients does not seem to modify facial mobility. Only an enhancement of amplitude in smiling and knitting brows was observed. These results could have reliable medical and surgical applications.
16. Aguiar JSR, De Paiva Silva AI, Rocha Aguiar CS, Torro-Alves N, De Souza WC. A influência da intensidade emocional no reconhecimento de emoções em faces por crianças brasileiras [The influence of emotional intensity on the recognition of emotions in faces by Brazilian children]. Universitas Psychologica 2017. [DOI: 10.11144/javeriana.upsy15-5.iier]
Abstract
The ability to recognize emotions in faces is essential to human interaction and is present from childhood. Hypothesis: research using the morphing technique assumes that children require greater or lesser intensity of emotional expression in order to perceive it. Objective: to examine the emotional recognition of faces in childhood, using a task with varying emotional intensity. Method: a Test of Facial Emotion Recognition for Children was applied to 28 children between 7 and 11 years of age, of both sexes, which presented 168 faces of the six basic emotions, manipulated with the morphing technique. Results: the likelihood of success on the task tended to increase with age; accuracy was highest for happiness and lowest for fear; and each unit increase in emotional intensity raised the odds of success by 42%. Conclusion: these findings are relevant because they show that assessing the recognition of emotions at different intensity levels provides a more sensitive method.
17. Butcher N, Lander K, Jagger R. A search advantage for dynamic same-race and other-race faces. Visual Cognition 2016. [DOI: 10.1080/13506285.2016.1262487]
Affiliation(s)
- Natalie Butcher: Social Futures Institute, Teesside University, Middlesbrough, UK
- Karen Lander: School of Psychological Sciences, University of Manchester, Manchester, UK
- Rachel Jagger: School of Psychological Sciences, University of Manchester, Manchester, UK
18. Dobs K, Bülthoff I, Schultz J. Identity information content depends on the type of facial movement. Sci Rep 2016; 6:34301. [PMID: 27683087] [PMCID: PMC5041143] [DOI: 10.1038/srep34301]
Abstract
Facial movements convey information about many social cues, including identity. However, how much information about a person's identity is conveyed by different kinds of facial movements is unknown. We addressed this question using a recent motion capture and animation system, with which we animated one avatar head with facial movements of three types: (1) emotional, (2) emotional in social interaction and (3) conversational, all recorded from several actors. In a delayed match-to-sample task, observers were best at matching actor identity across conversational movements, worse with emotional movements in social interactions, and at chance level with emotional facial expressions. Model observers performing this task showed similar performance profiles, indicating that performance variation was due to differences in information content, rather than processing. Our results suggest that conversational facial movements transmit more dynamic identity information than emotional facial expressions, thus suggesting different functional roles and processing mechanisms for different types of facial motion.
Affiliation(s)
- Katharina Dobs: Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Université de Toulouse, Centre de Recherche Cerveau et Cognition, Université Paul Sabatier, Toulouse, France; CNRS, Faculté de Médecine de Purpan, UMR 5549, Toulouse, France
- Isabelle Bülthoff: Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Johannes Schultz: Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Division of Medical Psychology and Department of Psychiatry, University of Bonn, Bonn, Germany
19. The many faces of a face: Comparing stills and videos of facial expressions in eight dimensions (SAVE database). Behav Res Methods 2016; 49:1343-1360. [DOI: 10.3758/s13428-016-0790-5]
20. Roark DA, O'Toole AJ, Abdi H, Barrett SE. Learning the Moves: The Effect of Familiarity and Facial Motion on Person Recognition across Large Changes in Viewing Format. Perception 2006; 35:761-73. [PMID: 16836043] [DOI: 10.1068/p5503]
Abstract
Familiarity with a face or person can support recognition in tasks that require generalization to novel viewing contexts. Using naturalistic viewing conditions requiring recognition of people from face or whole body gait stimuli, we investigated the effects of familiarity, facial motion, and direction of learning/test transfer on person recognition. Participants were familiarized with previously unknown people from gait videos and were tested on faces (experiment 1a) or were familiarized with faces and were tested with gait videos (experiment 1b). Recognition was more accurate when learning from the face and testing with the gait videos, than when learning from the gait videos and testing with the face. The repetition of a single stimulus, either the face or gait, produced strong recognition gains across transfer conditions. Also, the presentation of moving faces resulted in better performance than that of static faces. In experiment 2, we investigated the role of facial motion further by testing recognition with static profile images. Motion provided no benefit for recognition, indicating that structure-from-motion is an unlikely source of the motion advantage found in the first set of experiments.
Affiliation(s)
- Dana A Roark: School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, TX 75083-0688, USA
21. Yovel G, O'Toole AJ. Recognizing People in Motion. Trends Cogn Sci 2016; 20:383-395. [DOI: 10.1016/j.tics.2016.02.005]
22. Butcher N, Lander K. Exploring the motion advantage: evaluating the contribution of familiarity and differences in facial motion. Q J Exp Psychol (Hove) 2016; 70:919-929. [PMID: 26822035] [DOI: 10.1080/17470218.2016.1138974]
Abstract
Seeing a face move can improve familiar face recognition, face matching, and learning. More specifically, familiarity with a face may facilitate the learning of an individual's "dynamic facial signature". In the outlined research, we examine the relationship between participant ratings of familiarity, the distinctiveness of motion, the amount of facial motion, and the recognition of familiar moving faces (Experiment 1) as well as the magnitude of the motion advantage (Experiment 2). Significant positive correlations were found between all factors. Findings suggest that faces rated as moving a lot and in a distinctive manner benefited the most from being seen in motion. Additionally, findings indicate that facial motion information becomes a more important cue to recognition the more familiar a face is, suggesting that "dynamic facial signatures" continue to be learnt over time and integrated within the face representation. Results are discussed in relation to theoretical explanations of the moving face advantage.
Affiliation(s)
- Natalie Butcher: Social Futures Institute, Teesside University, Middlesbrough, UK
- Karen Lander: School of Psychological Sciences, University of Manchester, Manchester, UK
23. Favelle S, Tobin A, Piepers D, Burke D, Robbins RA. Dynamic composite faces are processed holistically. Vision Res 2015; 112:26-32. [DOI: 10.1016/j.visres.2015.05.002]
24. Tian M, Grill-Spector K. Spatiotemporal information during unsupervised learning enhances viewpoint invariant object recognition. J Vis 2015; 15:7. [PMID: 26024454] [DOI: 10.1167/15.6.7]
Abstract
Recognizing objects is difficult because it requires both linking views of an object that can be different and distinguishing objects with similar appearance. Interestingly, people can learn to recognize objects across views in an unsupervised way, without feedback, just from the natural viewing statistics. However, there is intense debate regarding what information during unsupervised learning is used to link among object views. Specifically, researchers argue whether temporal proximity, motion, or spatiotemporal continuity among object views during unsupervised learning is beneficial. Here, we untangled the role of each of these factors in unsupervised learning of novel three-dimensional (3-D) objects. We found that after unsupervised training with 24 object views spanning a 180° view space, participants showed significant improvement in their ability to recognize 3-D objects across rotation. Surprisingly, there was no advantage to unsupervised learning with spatiotemporal continuity or motion information than training with temporal proximity. However, we discovered that when participants were trained with just a third of the views spanning the same view space, unsupervised learning via spatiotemporal continuity yielded significantly better recognition performance on novel views than learning via temporal proximity. These results suggest that while it is possible to obtain view-invariant recognition just from observing many views of an object presented in temporal proximity, spatiotemporal information enhances performance by producing representations with broader view tuning than learning via temporal association. Our findings have important implications for theories of object recognition and for the development of computational algorithms that learn from examples.
25. Maguinness C, Newell FN. Non-rigid, but not rigid, motion interferes with the processing of structural face information in developmental prosopagnosia. Neuropsychologia 2015; 70:281-95. [PMID: 25737056] [DOI: 10.1016/j.neuropsychologia.2015.02.038]
Abstract
There is growing evidence to suggest that facial motion is an important cue for face recognition. However, it is poorly understood whether motion is integrated with facial form information or whether it provides an independent cue to identity. To provide further insight into this issue, we compared the effect of motion on face perception in two developmental prosopagnosics and age-matched controls. Participants first learned faces presented dynamically (video), or in a sequence of static images, in which rigid (viewpoint) or non-rigid (expression) changes occurred. Immediately following learning, participants were required to match a static face image to the learned face. Test face images varied by viewpoint (Experiment 1) or expression (Experiment 2) and were learned or novel face images. We found similar performance across prosopagnosics and controls in matching facial identity across changes in viewpoint when the learned face was shown moving in a rigid manner. However, non-rigid motion interfered with face matching across changes in expression in both individuals with prosopagnosia, relative to the performance of control participants. In contrast, non-rigid motion did not differentially affect the matching of facial expressions across changes in identity for either prosopagnosic (Experiment 3). Our results suggest that whilst the processing of rigid motion information of a face may be preserved in developmental prosopagnosia, non-rigid motion can specifically interfere with the representation of structural face information. Taken together, these results suggest that both form and motion cues are important in face perception and that these cues are likely integrated in the representation of facial identity.
Affiliation(s)
- Corrina Maguinness: School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- Fiona N Newell: School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
26. Laurence S, Hole GJ, Hills PJ. Lecturers' faces fatigue their students: Face identity aftereffects for dynamic and static faces. Visual Cognition 2014. [DOI: 10.1080/13506285.2014.950364]
27. Ichikawa H, Kanazawa S, Yamaguchi MK. Infants recognize the subtle happiness expression. Perception 2014; 43:235-48. [PMID: 25109015] [DOI: 10.1068/p7595]
Abstract
Facial movement facilitates the recognition of facial expressions. While an intense expression is expressive enough to be recognized in a still image, a subtle expression can be recognized only in motion (Ambadar, Schooler, & Cohn, 2005, Psychological Science, 16, 403-410). The present study investigated whether infants recognize a subtle expression, and whether facial movement facilitates infants' recognition of a subtle expression. In experiment 1 4- to 7-month-old infants were tested for their spontaneous preference for a happy subtle expression rather than a neutral face, but they did not show a spontaneous preference. To confirm that infants did not recognize the static subtle expression, we conducted experiment 2 using the familiarization-novelty procedure. Infants were first familiarized with a static subtle happy expression. Following familiarization, they were presented with a pair of peak expressions of happiness and anger, but showed no significant novelty preference. In experiment 3 we presented the subtle expression dynamically. Infants were familiarized with a dynamic subtle expression and were tested for their novelty preference. The 6- to 7-month-olds showed a significant novelty preference, while 4- to 5-month-olds did not. These results suggest that infants can recognize the subtle expression only in dynamic presentation and that facial movement facilitates infants' recognition of facial expression.
28. Xiao NG, Perrotta S, Quinn PC, Wang Z, Sun YHP, Lee K. On the facilitative effects of face motion on face recognition and its development. Front Psychol 2014; 5:633. [PMID: 25009517] [PMCID: PMC4067594] [DOI: 10.3389/fpsyg.2014.00633]
Abstract
For the past century, researchers have extensively studied human face processing and its development. These studies have advanced our understanding of not only face processing, but also visual processing in general. However, most of what we know about face processing was investigated using static face images as stimuli. Therefore, an important question arises: to what extent does our understanding of static face processing generalize to face processing in real-life contexts in which faces are mostly moving? The present article addresses this question by examining recent studies on moving face processing to uncover the influence of facial movements on face processing and its development. First, we describe evidence on the facilitative effects of facial movements on face recognition and two related theoretical hypotheses: the supplementary information hypothesis and the representation enhancement hypothesis. We then highlight several recent studies suggesting that facial movements optimize face processing by activating specific face processing strategies that accommodate to task requirements. Lastly, we review the influence of facial movements on the development of face processing in the first year of life. We focus on infants' sensitivity to facial movements and explore the facilitative effects of facial movements on infants' face recognition performance. We conclude by outlining several future directions to investigate moving face processing and emphasize the importance of including dynamic aspects of facial information to further understand face processing in real-life contexts.
Affiliation(s)
- Naiqi G. Xiao: Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China; Applied Psychology and Human Development, University of Toronto, Toronto, ON, Canada
- Steve Perrotta: Applied Psychology and Human Development, University of Toronto, Toronto, ON, Canada
- Paul C. Quinn: Department of Psychology, University of Delaware, Newark, DE, USA
- Zhe Wang: Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Yu-Hao P. Sun: Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Kang Lee: Applied Psychology and Human Development, University of Toronto, Toronto, ON, Canada
29. Fink B, Weege B, Neave N, Ried B, Cardoso Do Lago O. Female Perceptions of Male Body Movements. Evolutionary Psychology 2014. [DOI: 10.1007/978-1-4939-0314-6_16]
30. Werner NS, Kühnel S, Markowitsch HJ. The neuroscience of face processing and identification in eyewitnesses and offenders. Front Behav Neurosci 2013; 7:189. [PMID: 24367306] [PMCID: PMC3853647] [DOI: 10.3389/fnbeh.2013.00189]
Abstract
Humans are experts in face perception. We are better at distinguishing between faces and their components than between any other kind of object. Several studies investigating the underlying neural networks provided evidence for deviant face processing in criminal individuals, although results are often confounded by accompanying mental or addiction disorders. On the other hand, face processing in non-criminal healthy persons can be of high juridical interest in cases of witnessing a felony and afterward identifying a culprit. Memory and therefore recognition of a person can be affected by many parameters and thus become distorted. But face processing itself is also modulated by different factors like facial characteristics, degree of familiarity, and emotional relation. These factors make the comparison of different cases, as well as the transfer of laboratory results to real-life settings, very challenging. Several neuroimaging studies have been published in recent years and some progress was made connecting certain brain activation patterns with the correct recognition of an individual. However, there is still a long way to go before brain imaging can make a reliable contribution to court procedures.
Affiliation(s)
- Sina Kühnel: Physiological Psychology, University of Bielefeld, Bielefeld, Germany
31. Xiao NG, Quinn PC, Ge L, Lee K. Elastic facial movement influences part-based but not holistic processing. J Exp Psychol Hum Percept Perform 2013; 39:1457-67. [PMID: 23398253] [DOI: 10.1037/a0031631]
Abstract
Face processing has been studied for decades. However, most of the empirical investigations have been conducted using static face images as stimuli. Little is known about whether static face processing findings can be generalized to real-world contexts in which faces are constantly moving. The present study investigated the nature of face processing (holistic vs. part-based) in elastic moving faces. Specifically, we focused on whether elastic moving faces, as compared with static ones, can facilitate holistic or part-based face processing. Using the composite paradigm, we asked participants to remember either an elastic moving face (i.e., a face that blinks and chews) or a static face, and then tested them with a static composite face. The composite effect was (a) significantly smaller in the dynamic condition than in the static condition, (b) consistently found with different face encoding times (Experiments 1-3), and (c) present for the recognition of both upper and lower face parts (Experiment 4). These results suggest that elastic facial motion facilitates part-based processing rather than holistic processing. Thus, whereas previous work with static faces has emphasized an important role for holistic processing, the current work highlights an important role for featural processing with moving faces.
Affiliation(s)
- Naiqi G Xiao: Dr. Eric Jackman Institute of Child Study, University of Toronto
32. Longmore CA, Tree JJ. Motion as a cue to face recognition: evidence from congenital prosopagnosia. Neuropsychologia 2013; 51:864-75. [PMID: 23391556] [DOI: 10.1016/j.neuropsychologia.2013.01.022]
Abstract
Congenital prosopagnosia is a condition that, present from an early age, makes it difficult for an individual to recognise someone from his or her face. Typically, research into prosopagnosia has employed static images that do not contain the extra information we can obtain from moving faces and, as a result, very little is known about the role of facial motion for identity processing in prosopagnosia. Two experiments comparing the performance of four congenital prosopagnosics with that of age-matched and younger controls on their ability to learn and recognise (Experiment 1) and match (Experiment 2) novel faces are reported. It was found that younger controls' recognition memory performance increased with dynamic presentation; however, only one of the four prosopagnosics showed any improvement. Motion aided the matching performance of age-matched controls and all prosopagnosics. In addition, the face inversion effect, an effect that tends to be reduced in prosopagnosia, emerged when prosopagnosics matched moving faces. The results suggest that facial motion can be used as a cue to identity, but that this may be a complex and difficult cue to retain. As prosopagnosics' performance improved with the dynamic presentation of faces, it would appear that prosopagnosics can use motion as a cue to recognition, and the different patterns for the face inversion effect that occurred in the prosopagnosics for static and dynamic faces suggest that the mechanisms used for dynamic facial motion recognition are dissociable from static mechanisms.
33. Bennetts RJ, Kim J, Burke D, Brooks KR, Lucey S, Saragih J, Robbins RA. The Movement Advantage in Famous and Unfamiliar Faces: A Comparison of Point-Light Displays and Shape-Normalised Avatar Stimuli. Perception 2013; 42:950-70. [PMID: 24386715] [DOI: 10.1068/p7446]
Abstract
Facial movement may provide cues to identity, by supporting the extraction of face shape information via structure-from-motion, or via characteristic patterns of movement. Currently, it is unclear whether familiar and unfamiliar faces derive the same benefit from these mechanisms. This study examined the movement advantage by asking participants to match moving and static images of famous and unfamiliar faces to facial point-light displays (PLDs) or shape-normalised avatars in a same/different task (experiment 1). In experiment 2 we also used a same/different task, but participants matched from PLD to PLD or from avatar to avatar. In both experiments, unfamiliar face matching was more accurate for PLDs than for avatars, but there was no effect of stimulus type on famous faces. In experiment 1, there was no movement advantage, but in experiment 2, there was a significant movement advantage for famous and unfamiliar faces. There was no evidence that familiarity increased the movement advantage. For unfamiliar faces, results suggest that participants were relying on characteristic movement patterns to match the faces, and did not derive any extra benefit from the structure-from-motion cues in the PLDs. The results indicate that participants may use static and movement-based cues in a flexible manner when matching famous and unfamiliar faces.
Affiliation(s)
- Rachel J Bennetts: The MARCS Institute, University of Western Sydney, Locked Bag 1797, Penrith, NSW 2751, Australia
- Jeesun Kim: The MARCS Institute, University of Western Sydney, Locked Bag 1797, Penrith, NSW 2751, Australia
- Darren Burke: School of Psychology, University of Newcastle, Science Offices, 10 Chittaway Road, Ourimbah, NSW 2258, Australia
- Kevin R Brooks: Department of Psychology, Macquarie University, NSW 2109, Australia
- Simon Lucey: ICT Centre, CSIRO, 1 Technology Court, Brisbane, QLD 4069, Australia
- Jason Saragih: ICT Centre, CSIRO, 1 Technology Court, Brisbane, QLD 4069, Australia
- Rachel A Robbins: School of Social Sciences and Psychology, University of Western Sydney, Locked Bag 1797, Penrith, NSW 2751, Australia
34
|
Piepers DW, Robbins RA. A Review and Clarification of the Terms "holistic," "configural," and "relational" in the Face Perception Literature. Front Psychol 2012; 3:559. [PMID: 23413184 PMCID: PMC3571734 DOI: 10.3389/fpsyg.2012.00559] [Citation(s) in RCA: 126] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2012] [Accepted: 11/27/2012] [Indexed: 11/15/2022] Open
Abstract
It is widely agreed that the human face is processed differently from other objects. However, there is a lack of consensus on what is meant by a wide array of terms used to describe this “special” face processing (e.g., holistic and configural) and the perceptually relevant information within a face (e.g., relational properties and configuration). This paper will review existing models of holistic/configural processing, discuss how they differ from one another conceptually, and review the wide variety of measures used to tap into these concepts. In general, we favor a model where holistic processing of a face includes some or all of the interrelations between features and has separate coding for features. However, some aspects of the model remain unclear. We propose the use of moving faces as a way of clarifying what types of information are included in the holistic representation of a face.
Collapse
Affiliation(s)
- Daniel W Piepers
- School of Social Sciences and Psychology, University of Western Sydney Sydney, NSW, Australia
| | | |
Collapse
|
35
|
Barr JR, Bowyer KW, Flynn PJ, Biswas S. Face recognition from video: a review. Int J Pattern Recogn 2012. [DOI: 10.1142/s0218001412660024] [Citation(s) in RCA: 75] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Driven by key law enforcement and commercial applications, research on face recognition from video sources has intensified in recent years. The ensuing results have demonstrated that videos possess unique properties that allow both humans and automated systems to perform recognition accurately in difficult viewing conditions. However, significant research challenges remain as most video-based applications do not allow for controlled recordings. In this survey, we categorize the research in this area and present a broad and deep review of recently proposed methods for overcoming the difficulties encountered in unconstrained settings. We also draw connections between the ways in which humans and current algorithms recognize faces. An overview of the most popular and difficult publicly available face video databases is provided to complement these discussions. Finally, we cover key research challenges and opportunities that lie ahead for the field as a whole.
Collapse
Affiliation(s)
- Jeremiah R. Barr
- Department of Computer Science & Engineering, University of Notre Dame, 384 Fitzpatrick Hall, Notre Dame, Indiana 46556, United States
| | - Kevin W. Bowyer
- Department of Computer Science & Engineering, University of Notre Dame, 384 Fitzpatrick Hall, Notre Dame, Indiana 46556, United States
| | - Patrick J. Flynn
- Department of Computer Science & Engineering, University of Notre Dame, 384 Fitzpatrick Hall, Notre Dame, Indiana 46556, United States
| | - Soma Biswas
- Department of Computer Science & Engineering, University of Notre Dame, 384 Fitzpatrick Hall, Notre Dame, Indiana 46556, United States
| |
Collapse
|
36
|
Rigid facial motion influences featural, but not holistic, face processing. Vision Res 2012; 57:26-34. [PMID: 22342561 DOI: 10.1016/j.visres.2012.01.015] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2011] [Revised: 01/18/2012] [Accepted: 01/26/2012] [Indexed: 11/21/2022]
Abstract
We report three experiments in which we investigated the effect of rigid facial motion on face processing. Specifically, we used the face composite effect to examine whether rigid facial motion influences primarily featural or holistic processing of faces. In Experiments 1-3, participants were first familiarized with dynamic displays in which a target face turned from one side to another; then at test, participants judged whether the top half of a composite face (the top half of the target/foil face aligned or misaligned with the bottom half of a foil face) belonged to the target face. We compared performance in the dynamic condition to various static control conditions in Experiments 1-3, which differed from each other in terms of the display order of the multiple static images or the inter-stimulus interval (ISI) between the images. We found that the size of the face composite effect in the dynamic condition was significantly smaller than that in the static conditions. In other words, the dynamic face display led participants to process the target faces in a part-based manner, so their recognition of the upper portion of the composite face at test suffered less interference from the aligned lower half of the foil face. The findings provide the strongest evidence to date that rigid facial motion mainly influences featural, but not holistic, face processing.
Collapse
|
37
|
He X, Kim J, Barnes N. A face-based visual fixation system for prosthetic vision. Annu Int Conf IEEE Eng Med Biol Soc 2012; 2012:2981-2984. [PMID: 23366551 DOI: 10.1109/embc.2012.6346590] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
Recent studies have shown that face recognition is possible with low-resolution prosthetic vision, but it requires a zoomed-in, stably fixated view, which is challenging for users given the limited resolution of current prosthetic vision devices. We propose a real-time object detection and tracking system capable of fixating human faces. By integrating both static and temporal information, we improve the robustness of face localization so that the system can fixate on faces with large pose variations. Our qualitative and quantitative results demonstrate the viability of supplementing visual prosthetic devices with the ability to visually fixate objects automatically, providing a stable zoomed-in image stream to facilitate face and expression recognition.
Collapse
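The general approach described in the entry above, a per-frame (static) face detector whose output is integrated over time to yield a stable, zoomed-in face stream, can be sketched in a few lines. The sketch below is a hypothetical illustration in Python using OpenCV's stock Haar-cascade detector and a simple exponential moving average for the temporal integration; it is not the authors' system, and the function name fixate_faces, the smoothing factor alpha, and the 64x64 output size are assumptions made for the example.

import cv2
import numpy as np

# Stock Haar-cascade face detector bundled with OpenCV (the static, per-frame cue).
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def fixate_faces(video_path, alpha=0.3, out_size=64):
    """Yield a stable, zoomed-in face crop for each frame of a video.

    Static information: the largest per-frame Haar detection.
    Temporal information: an exponential moving average of the face box,
    which damps detector jitter and bridges frames with missed detections.
    """
    cap = cv2.VideoCapture(video_path)
    smoothed = None  # running (x, y, w, h) estimate
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            # Take the largest detection as the current static estimate.
            x, y, w, h = max(faces, key=lambda b: int(b[2]) * int(b[3]))
            current = np.array([x, y, w, h], dtype=float)
            # Temporal integration: blend the new detection into the running box.
            smoothed = current if smoothed is None else (
                alpha * current + (1.0 - alpha) * smoothed)
        if smoothed is not None:
            x, y, w, h = smoothed.astype(int)
            crop = frame[max(y, 0):y + h, max(x, 0):x + w]
            if crop.size:
                yield cv2.resize(crop, (out_size, out_size))
    cap.release()

In an actual prosthetic-vision pipeline the Haar detector would presumably be replaced by the authors' own detector-and-tracker combination, but fusing a per-frame estimate with a temporally smoothed one captures the basic idea of the static-plus-temporal integration described in the abstract.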
|
38
|
Butcher N, Lander K, Fang H, Costen N. The effect of motion at encoding and retrieval for same- and other-race face recognition. Br J Psychol 2011; 102:931-42. [DOI: 10.1111/j.2044-8295.2011.02060.x] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
39
|
|
40
|
|
41
|
Seeing an unfamiliar face in rotational motion does not aid identity discrimination across viewpoints. Vision Res 2010; 50:854-9. [DOI: 10.1016/j.visres.2010.02.013] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2010] [Accepted: 02/18/2010] [Indexed: 11/19/2022]
|
42
|
Otsuka Y, Konishi Y, Kanazawa S, Yamaguchi MK, Abdi H, O’Toole AJ. Recognition of Moving and Static Faces by Young Infants. Child Dev 2009; 80:1259-71. [DOI: 10.1111/j.1467-8624.2009.01330.x] [Citation(s) in RCA: 74] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
|
43
|
Steede LL, Tree JJ, Hole GJ. I can't recognize your face but I can recognize its movement. Cogn Neuropsychol 2008; 24:451-66. [PMID: 18416501 DOI: 10.1080/02643290701381879] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
Idiosyncratic facial movements can provide a route to facial identity (review in Roark, Barrett, Spence, Abdi, & O'Toole, 2003). However, it is unclear whether recognizing a face in this way involves the same cognitive or neural mechanisms that are involved in recognizing a static face. Three studies on a developmental prosopagnosic (C.S.) showed that although he is impaired at recognizing static faces, he can discriminate between dynamic identities (Experiments 1a and 1b) and can learn to name individuals on the basis of their idiosyncratic facial movements (Experiment 2), at levels that are comparable to those of matched and undergraduate control groups. These results suggest a possible cognitive dissociation between mechanisms involved in dynamic compared to static face recognition. However, future work is needed to fully understand this dissociation.
Collapse
|
44
|
Simon D, Craig KD, Gosselin F, Belin P, Rainville P. Recognition and discrimination of prototypical dynamic expressions of pain and emotions. Pain 2008; 135:55-64. [PMID: 17583430 DOI: 10.1016/j.pain.2007.05.008] [Citation(s) in RCA: 169] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2007] [Revised: 04/12/2007] [Accepted: 05/02/2007] [Indexed: 11/17/2022]
Abstract
Facial expressions of pain and emotions provide powerful social signals, which impart information about a person's state. Unfortunately, research on pain and emotion expression has been conducted largely in parallel with few bridges allowing for direct comparison of the expressive displays and their impact on observers. Moreover, although facial expressions are highly dynamic, previous research has relied mainly on static photographs. Here we directly compare the recognition and discrimination of dynamic facial expressions of pain and basic emotions by naive observers. One-second film clips were recorded in eight actors displaying neutral facial expressions and expressions of pain and the basic emotions of anger, disgust, fear, happiness, sadness and surprise. Results based on the Facial Action Coding System (FACS) confirmed the distinct (and prototypical) configuration of pain and basic emotion expressions reported in previous studies. Volunteers' evaluations of those dynamic expressions on intensity, arousal and valence demonstrate the high sensitivity and specificity of the observers' judgement. Additional rating data further suggest that, for comparable expression intensity, pain is perceived as more arousing and more unpleasant. This study strongly supports the claim that the facial expression of pain is distinct from the expression of basic emotions. This set of dynamic facial expressions provides unique material to explore the psychological and neurobiological processes underlying the perception of pain expression, its impact on the observer, and its role in the regulation of social behaviour.
Collapse
Affiliation(s)
- Daniela Simon
- Department of Clinical Psychology, Humboldt University of Berlin, Germany.
| | | | | | | | | |
Collapse
|
45
|
Steede LL, Hole GJ. Repetition priming and recognition of dynamic and static chimeras. Perception 2007; 35:1367-82. [PMID: 17214382 DOI: 10.1068/p5515] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
Chimeric faces, produced by combining the top half of a familiar face with the bottom half of a different familiar face, are difficult to recognise explicitly. However, given that they contain potentially useful configurational and featural information for face recognition, they might nevertheless produce some activation of representations of their constituent faces. Repetition priming with dynamic and static facial chimeras was used to test this possibility. Whereas half-faces produced significant repetition priming of their familiar counterparts, neither type of chimera did. When analyses were restricted to faces that were recognised during the prime phase, repetition priming was both significant and equivalent for chimeras and half-faces. The results suggest that the constituents of a facial chimera must be parsed, and recognised, in order to cause repetition priming of their familiar counterparts. Facial motion does not help with the parsing of a facial chimera.
Collapse
Affiliation(s)
- Leslie L Steede
- School of Life Sciences, Department of Psychology, Pevensey Building, University of Sussex, Falmer BN1 9QH, UK.
| | | |
Collapse
|
46
|
|
47
|
Schwaninger A, Wallraven C, Cunningham DW, Chiller-Glaus SD. Processing of facial identity and expression: a psychophysical, physiological, and computational perspective. PROGRESS IN BRAIN RESEARCH 2006; 156:321-43. [PMID: 17015089 DOI: 10.1016/s0079-6123(06)56018-2] [Citation(s) in RCA: 33] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
A deeper understanding of how the brain processes visual information can be obtained by comparing results from complementary fields such as psychophysics, physiology, and computer science. In this chapter, empirical findings are reviewed with regard to the proposed mechanisms and representations for processing identity and emotion in faces. Results from psychophysics clearly show that faces are processed by analyzing component information (eyes, nose, mouth, etc.) and their spatial relationship (configural information). Results from neuroscience indicate separate neural systems for recognition of identity and facial expression. Computer science offers a deeper understanding of the required algorithms and representations, and provides computational modeling of psychological and physiological accounts. An interdisciplinary approach taking these different perspectives into account provides a promising basis for better understanding and modeling of how the human brain processes visual information for recognition of identity and emotion in faces.
Collapse
Affiliation(s)
- Adrian Schwaninger
- Department of Bülthoff, Max Planck Institute for Biological Cybernetics, Spemannstr. 38, 72076 Tübingen, Germany.
| | | | | | | |
Collapse
|
48
|
Lander K, Humphreys G, Bruce V. Exploring the role of motion in prosopagnosia: recognizing, learning and matching faces. Neurocase 2004; 10:462-70. [PMID: 15788286 DOI: 10.1080/13554790490900761] [Citation(s) in RCA: 15] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
HJA has been completely unable to recognize faces since suffering a stroke some 22 years ago. Previous research has shown that he is poor at judging expressions from static photographs of faces, but performs relatively normally at these judgements when presented with moving point-light patterns (Humphreys et al., 1993). Recent research with non-prosopagnosic participants has suggested a beneficial role for facial motion when recognizing familiar faces and learning new faces. Three experiments are reported that investigate the role of face motion for HJA when recognizing (Experiment 1), learning (Experiment 2) and matching faces (Experiment 3). The results indicate that HJA is unable to use face motion to explicitly recognize faces and is no better at learning names for moving faces than static ones. However, HJA is significantly better at matching moving faces for identity, an opposite pattern to that found with age-matched and undergraduate control participants. We suggest that HJA is not impaired at processing motion information but remains unable to use motion as a cue to identity.
Collapse
Affiliation(s)
- Karen Lander
- Department of Psychology, University of Manchester, Manchester, UK.
| | | | | |
Collapse
|