1
Gilad-Gutnick S, Kurian GS, Gupta P, Shah P, Tiwari K, Ralekar C, Gandhi T, Ganesh S, Mathur U, Sinha P. Motion's privilege in recognizing facial expressions following treatment for blindness. Curr Biol 2024:S0960-9822(24)00949-7. PMID: 39116886. DOI: 10.1016/j.cub.2024.07.046.
Abstract
In his 1872 monograph, Charles Darwin posited that "… the habit of expressing our feelings by certain movements, though now rendered innate, had been in some manner gradually acquired."1 Nearly 150 years later, researchers are still teasing apart innate versus experience-dependent contributions to expression recognition. Indeed, studies have shown that face detection is surprisingly resilient to early visual deprivation,2,3,4,5 pointing to plasticity that extends beyond dogmatic critical periods.6,7,8 However, it remains unclear whether such resilience extends to downstream processing, such as the ability to recognize facial expressions. The extent to which innate versus experience-dependent mechanisms contribute to this ability has yet to be fully explored.9,10,11,12,13 To investigate the impact of early visual experience on facial-expression recognition, we studied children with congenital cataracts who have undergone sight-correcting treatment14,15 and tracked their longitudinal skill acquisition as they gain sight late in life. We introduce and explore two potential facilitators of late-life plasticity: the availability of newborn-like coarse visual acuity prior to treatment16 and the privileged role of motion following treatment.4,17,18 We find that early visual deprivation does not preclude partial acquisition of facial-expression recognition. While rudimentary pretreatment vision is sufficient to allow a low level of expression recognition, it does not facilitate post-treatment improvements. Additionally, only children commencing vision with high visual acuity privilege the use of dynamic cues. We conclude that skipping typical visual experience early in development and introducing high-resolution imagery late in development restricts, but does not preclude, facial-expression skill acquisition and that the representational mechanisms driving this learning differ from those that emerge during typical visual development.
Affiliation(s)
- Sharon Gilad-Gutnick
- Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, Massachusetts Avenue, Cambridge, MA 02139, USA
- Grace S Kurian
- University Hospital Centre and University of Lausanne (CHUV), Department of Radiology, Rue de Bugnon, CH-1011 Lausanne, Switzerland
- Priti Gupta
- Project Prakash, Dr. Shroff's Charity Eye Hospital, New Delhi 110002, India
- Pragya Shah
- Project Prakash, Dr. Shroff's Charity Eye Hospital, New Delhi 110002, India
- Kashish Tiwari
- Project Prakash, Dr. Shroff's Charity Eye Hospital, New Delhi 110002, India
- Chetan Ralekar
- Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, Massachusetts Avenue, Cambridge, MA 02139, USA
- Tapan Gandhi
- Indian Institute of Technology Delhi (IIT Delhi), Department of Electrical Engineering, IIT Delhi Main Rd., New Delhi 110016, India
- Suma Ganesh
- Department of Pediatric Ophthalmology, Dr. Shroff's Charity Eye Hospital, New Delhi 110002, India
- Umang Mathur
- Department of Pediatric Ophthalmology, Dr. Shroff's Charity Eye Hospital, New Delhi 110002, India
- Pawan Sinha
- Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, Massachusetts Avenue, Cambridge, MA 02139, USA
2
Butcher N, Bennetts RJ, Sexton L, Barbanta A, Lander K. Eye movement differences when recognising and learning moving and static faces. Q J Exp Psychol (Hove) 2024:17470218241252145. PMID: 38644390. DOI: 10.1177/17470218241252145.
Abstract
Seeing a face in motion can aid subsequent face recognition. Several explanations have been proposed for this "motion advantage," but other contributing factors have received less attention. For example, facial movement might enhance recognition by attracting attention to the internal facial features, thereby facilitating identification. However, there is no direct evidence that, compared with static faces, motion increases attention to the regions of the face that facilitate identification (i.e., the internal features). We tested this hypothesis by recording participants' eye movements while they completed a famous-face recognition task (Experiment 1, N = 32) and face-learning tasks (Experiment 2, N = 60; Experiment 3, N = 68), with presentation style manipulated (moving or static). Across all three experiments, a motion advantage was found, and participants directed a higher proportion of fixations to the internal features (i.e., eyes, nose, and mouth) of moving faces than of static ones. Conversely, the proportion of fixations to the internal non-feature areas (i.e., cheeks, forehead, chin) and the external area (Experiment 3) was significantly reduced for moving compared with static faces (all ps < .05). The results suggest that facial motion is associated with increased attention to internal facial features during both familiar and unfamiliar face recognition, but only during familiar face recognition is the magnitude of the motion advantage functionally related to the proportion of fixations directed to the internal features.
Affiliation(s)
- Natalie Butcher
- Department of Psychology, Teesside University, Middlesbrough, UK
- Laura Sexton
- Department of Psychology, Teesside University, Middlesbrough, UK
- School of Psychology, Faculty of Health Sciences and Wellbeing, University of Sunderland, Sunderland, UK
- Karen Lander
- Division of Psychology, Communication and Human Neuroscience, University of Manchester, Manchester, UK
3
Wang H, Lian Y, Wang A, Chen E, Liu C. Face motion form at learning influences the time course of face spatial frequency processing during test. Biol Psychol 2023; 183:108691. PMID: 37748703. DOI: 10.1016/j.biopsycho.2023.108691.
Abstract
Studies using static faces suggest that facial processing follows a coarse-to-fine sequence, i.e., holistic processing precedes featural processing, because low and high spatial frequencies (LSF, HSF) transmit holistic/global and featural/local information, respectively. Although recent studies have examined the role of facial movement in holistic face processing, it is unclear whether moving faces engage the same processing mechanism as static ones, especially in the time course of processing. The current study used the event-related potential (ERP) technique to investigate this issue by manipulating the face format at study (moving or static) and the face spatial frequency at test. The P1 amplitude was larger for LSF than for HSF faces after both moving and static study faces, and this LSF advantage was greater following moving study faces. The N170 amplitude was more sensitive to HSF than to LSF faces only when static study faces were used, whereas the P2 amplitude was more sensitive to LSF faces regardless of study format. None of these effects was modulated by face race. These results favor the view that, regardless of face race, moving study faces promote holistic processing during the earliest stage of face recognition. Furthermore, holistic processing was comparable for static and moving study faces at a later stage associated with more in-depth processing. Given these distinctions between holistic and featural processing for moving and static study faces, facial motion should be factored into future studies of face recognition.
Affiliation(s)
- Hailing Wang
- School of Psychology, Shandong Normal University, Jinan 250358, China
- Yujing Lian
- School of Psychology, Shandong Normal University, Jinan 250358, China
- Anqing Wang
- School of Psychology, Shandong Normal University, Jinan 250358, China
- Enguang Chen
- School of Psychology, Shandong Normal University, Jinan 250358, China
- Chengdong Liu
- School of Psychology, Shandong Normal University, Jinan 250358, China
4
Entzmann L, Guyader N, Kauffmann L, Peyrin C, Mermillod M. Detection of emotional faces: The role of spatial frequencies and local features. Vision Res 2023; 211:108281. PMID: 37421829. DOI: 10.1016/j.visres.2023.108281.
Abstract
Models of emotion processing suggest that threat-related stimuli such as fearful faces can be detected based on the rapid extraction of low spatial frequencies. However, this remains debated, as other models argue that the decoding of facial expressions relies on a more flexible use of spatial frequencies. The purpose of this study was to clarify the role of spatial frequencies, and of luminance-contrast differences between spatial frequencies, in the detection of facial emotions. We used a saccadic choice task in which emotional-neutral face pairs were presented and participants were asked to make a saccade toward either the neutral or the emotional (happy or fearful) face. Faces were displayed in low, high, or broad spatial frequencies. Participants were more accurate when saccading toward the emotional face. They were also more accurate with high or broad than with low spatial frequencies, and accuracy was higher with a happy target. An analysis of the eye and mouth saliency of our stimuli revealed that the saliency of the target's mouth correlated with participants' performance. Overall, this study underlines the importance of local over global information, and of the saliency of the mouth region, in the detection of emotional and neutral faces.
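Low-, high-, and broad-SF face stimuli of the kind used here are typically produced by filtering images in the Fourier domain. The sketch below is illustrative only, not the study's stimulus pipeline: the hard-edged (ideal) filter and the cutoff values are assumptions for demonstration.

```python
import numpy as np

def sf_filter(img, low_cut=None, high_cut=None):
    """Keep only spatial frequencies between low_cut and high_cut
    (in cycles per image) using an ideal Fourier-domain filter."""
    h, w = img.shape
    fy = np.fft.fftfreq(h) * h   # vertical frequencies, cycles per image
    fx = np.fft.fftfreq(w) * w   # horizontal frequencies, cycles per image
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = np.ones_like(radius, dtype=bool)
    if low_cut is not None:
        mask &= radius >= low_cut   # high-pass edge
    if high_cut is not None:
        mask &= radius <= high_cut  # low-pass edge
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

# Example: split a 128x128 test image into LSF and HSF versions
rng = np.random.default_rng(0)
face = rng.standard_normal((128, 128))        # stand-in for a face image
lsf = sf_filter(face, high_cut=8)             # hypothetical LSF cutoff
hsf = sf_filter(face, low_cut=32)             # hypothetical HSF cutoff
```

With no cutoffs the filter is the identity, which makes the behavior easy to verify before applying it to real face images.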
Affiliation(s)
- Léa Entzmann
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, 38000 Grenoble, France; Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, 38000 Grenoble, France; Icelandic Vision Lab, School of Health Sciences, University of Iceland, Reykjavík, Iceland
- Nathalie Guyader
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, 38000 Grenoble, France
- Louise Kauffmann
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, 38000 Grenoble, France
- Carole Peyrin
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, 38000 Grenoble, France
- Martial Mermillod
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, 38000 Grenoble, France
5
Kim H, Küster D, Girard JM, Krumhuber EG. Human and machine recognition of dynamic and static facial expressions: prototypicality, ambiguity, and complexity. Front Psychol 2023; 14:1221081. PMID: 37794914. PMCID: PMC10546417. DOI: 10.3389/fpsyg.2023.1221081.
Abstract
A growing body of research suggests that movement aids facial expression recognition. However, less is known about the conditions under which this dynamic advantage occurs. The aim of this research was to test emotion recognition with static and dynamic facial expressions, exploring the role of three featural parameters (prototypicality, ambiguity, and complexity) in human and machine analysis. In two studies, facial expression videos and corresponding images depicting the peak of the target and non-target emotion were presented to human observers and a machine classifier (FACET). Results revealed higher recognition rates for dynamic stimuli than for non-target images. This benefit disappeared for target-emotion images, which were recognised as well as (or even better than) videos and were more prototypical, less ambiguous, and more complex in appearance than non-target images. While prototypicality and ambiguity exerted more predictive power on machine performance, complexity was more indicative of human emotion recognition. Interestingly, recognition performance by the machine was superior to that of humans for both target and non-target images. Together, the findings point towards a compensatory role for dynamic information, particularly when static stimuli lack relevant features of the target emotion. Implications for research using automatic facial expression analysis (AFEA) are discussed.
Affiliation(s)
- Hyunwoo Kim
- Department of Experimental Psychology, University College London, London, United Kingdom
- Dennis Küster
- Cognitive Systems Lab, Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany
- Jeffrey M. Girard
- Department of Psychology, University of Kansas, Lawrence, KS, United States
- Eva G. Krumhuber
- Department of Experimental Psychology, University College London, London, United Kingdom
6
Charbonneau I, Guérette J, Cormier S, Blais C, Lalonde-Beaudoin G, Smith FW, Fiset D. The role of spatial frequencies for facial pain categorization. Sci Rep 2021; 11:14357. PMID: 34257357. PMCID: PMC8277883. DOI: 10.1038/s41598-021-93776-7.
Abstract
Studies of the low-level visual information underlying pain categorization have yielded inconsistent findings: some show an advantage for low spatial frequency information (SFs), others a preponderance of mid SFs. This study aims to resolve that inconsistency, since the two results have different theoretical and practical implications, such as how far away an observer can be and still categorize pain. We address the question with two complementary methods: a data-driven method without a priori expectations about the most useful SFs for pain recognition, and a more ecological method that simulates the viewing distance of the stimuli. We reveal a broad range of SFs important for pain recognition, extending from low to relatively high SFs, and show that performance is optimal at short to medium distances (1.2-4.8 m) but declines significantly when mid SFs are no longer available. This study reconciles previous results showing an advantage of LSFs over HSFs under arbitrary cutoffs, but above all reveals the prominent role of mid SFs in pain recognition across two complementary experimental tasks.
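The link between viewing distance and available SF content can be made concrete: a face of fixed physical size subtends fewer degrees of visual angle at greater distances, so a fixed acuity limit in cycles per degree caps the usable cycles per image. A rough sketch of this geometry follows; the face height and acuity limit are illustrative assumptions, not values from the study.

```python
import math

def max_visible_cpi(distance_m, face_height_m=0.25, acuity_cpd=30.0):
    """Highest spatial frequency (cycles per image) still resolvable when a
    face face_height_m tall is viewed from distance_m, assuming an acuity
    limit of acuity_cpd cycles per degree (both values are assumptions)."""
    # Angular size of the face in degrees of visual angle
    angle_deg = 2 * math.degrees(math.atan(face_height_m / (2 * distance_m)))
    # cycles/image = cycles/degree * degrees subtended by the image
    return acuity_cpd * angle_deg

for d in (1.2, 2.4, 4.8, 9.6):
    print(f"{d:4.1f} m -> ~{max_visible_cpi(d):6.1f} cpi")
```

Doubling the distance roughly halves the cpi ceiling, which is why mid-SF content drops out of reach first as an observer backs away.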
Affiliation(s)
- Isabelle Charbonneau
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC J8X 3X7, Canada
- Joël Guérette
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC J8X 3X7, Canada
- Stéphanie Cormier
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC J8X 3X7, Canada
- Caroline Blais
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC J8X 3X7, Canada
- Guillaume Lalonde-Beaudoin
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC J8X 3X7, Canada
- Fraser W Smith
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, UK
- Daniel Fiset
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC J8X 3X7, Canada
7
Aktürk T, de Graaf TA, Abra Y, Şahoğlu-Göktaş S, Özkan D, Kula A, Güntekin B. Event-related EEG oscillatory responses elicited by dynamic facial expression. Biomed Eng Online 2021; 20:41. PMID: 33906649. PMCID: PMC8077950. DOI: 10.1186/s12938-021-00882-8.
Abstract
BACKGROUND Recognition of facial expressions (FEs) plays a crucial role in social interactions. Most studies of FE recognition use static (image) stimuli, even though real-life FEs are dynamic. FE processing is complex and multifaceted, and its neural correlates remain unclear. Transitioning from static to dynamic FE stimuli might help disentangle the neural oscillatory mechanisms underlying face processing and the recognition of emotional expression. To our knowledge, we present the first time-frequency exploration of the oscillatory brain mechanisms underlying the processing of dynamic FEs. RESULTS Videos of joyful, fearful, and neutral dynamic facial expressions were presented to 18 healthy young adults. We analyzed event-related activity in electroencephalography (EEG) data, focusing on delta-, theta-, and alpha-band oscillations. Since the videos involved a transition from a neutral to an emotional expression (onset around 500 ms), we identified time windows that might correspond to initial face perception (first time window, TW) and to subsequent recognition of the emotional expression (around 1000 ms; second TW). The first TW showed increased power and phase-locking values for all frequency bands. In the second TW, power and phase-locking values were higher in the delta and theta bands for emotional than for neutral FEs, thus potentially serving as a marker for emotion recognition in dynamic face processing. CONCLUSIONS Our time-frequency exploration revealed consistent oscillatory responses to complex, dynamic, ecologically meaningful FE stimuli. We conclude that while dynamic FE processing involves complex network dynamics, dynamic FEs successfully revealed temporally separate oscillatory responses related to face processing and, subsequently, to recognition of the emotional expression.
Affiliation(s)
- Tuba Aktürk
- Program of Electroneurophysiology, Vocational School, Istanbul Medipol University, Istanbul, Turkey
- Program of Neuroscience Ph.D, Graduate School of Health Sciences, Istanbul Medipol University, Istanbul, Turkey
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Tom A de Graaf
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Yasemin Abra
- Department of Biological Sciences, Faculty of Arts and Sciences, Middle East Technical University, Ankara, Turkey
- Institute for Psychology, Faculty of Human Sciences, Universität der Bundeswehr München, Munich, Germany
- Department of Psychology, Faculty of Psychology and Educational Sciences, Ludwig-Maximilians-Universität München, Munich, Germany
- Sevilay Şahoğlu-Göktaş
- Program of Neuroscience Ph.D, Graduate School of Health Sciences, Istanbul Medipol University, Istanbul, Turkey
- Regenerative and Restorative Medicine Research Center (REMER), Istanbul Medipol University, Istanbul, Turkey
- Dilek Özkan
- Meram Faculty of Medicine, Konya Necmettin Erbakan University, Konya, Turkey
- Aysun Kula
- Department of Molecular Biology and Genetics, Faculty of Science, Sivas Cumhuriyet University, Sivas, Turkey
- Bahar Güntekin
- Department of Biophysics, School of Medicine, Istanbul Medipol University, Istanbul, Turkey
- Regenerative and Restorative Medicine Research Center (REMER), Istanbul Medipol University, Istanbul, Turkey
8
Pinpointing the optimal spatial frequency range for automatic neural facial fear processing. Neuroimage 2020; 221:117151. PMID: 32673746. DOI: 10.1016/j.neuroimage.2020.117151.
Abstract
Faces convey an assortment of emotional information via low and high spatial frequencies (LSFs and HSFs). However, there is no consensus on the role of particular spatial frequency (SF) information during facial fear processing. Comparison across studies is hampered by the high variability in cut-off values used to demarcate the SF spectrum and by differences in task demands. We investigated which SF information is minimally required to rapidly detect briefly presented fearful faces in an implicit and automatic manner, by sweeping through the entire SF range without the constraints of predefined LSF/HSF cut-offs. We combined fast periodic visual stimulation with electroencephalography, presenting neutral faces at 6 Hz with every 5th image a fearful face, which allowed us to quantify an objective neural index of fear discrimination at exactly 1.2 Hz. We started from a stimulus containing only very low or only very high SFs and gradually increased the SF content by adding higher or lower SF information, respectively, to reach the full SF spectrum over the course of 70 s. We found that SF information above 5.93 cycles per image (cpi) is minimally required to implicitly differentiate fearful from neutral faces. Moreover, exclusively HSF faces, even in a restricted SF range between 94.82 and 189.63 cpi, already carry the critical information for extracting the emotional expression of the faces.
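The oddball logic behind the 1.2 Hz index can be sketched numerically: with stimulation at 6 Hz and a fearful face at every 5th position, any response that discriminates fear from neutral appears at 6/5 = 1.2 Hz (and its harmonics) in the EEG spectrum. The toy simulation below illustrates this frequency-tagging arithmetic; it is not the authors' analysis pipeline, and the signal amplitudes are arbitrary.

```python
import numpy as np

fs = 512                   # sampling rate in Hz (illustrative)
base_hz = 6.0              # face presentation rate
oddball_hz = base_hz / 5   # fearful face every 5th image -> 1.2 Hz
t = np.arange(0, 70, 1 / fs)   # 70 s sweep, as in the paradigm

# Toy "EEG": a response at the base rate plus a smaller component that
# occurs only when the oddball (fearful) face is discriminated
signal = np.sin(2 * np.pi * base_hz * t) + 0.3 * np.sin(2 * np.pi * oddball_hz * t)

# Single-sided amplitude spectrum; 70 s yields ~0.014 Hz resolution,
# so 1.2 Hz falls on an exact frequency bin
spectrum = np.abs(np.fft.rfft(signal)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# The fear-discrimination index is the amplitude at exactly 1.2 Hz
idx = np.argmin(np.abs(freqs - oddball_hz))
print(f"oddball rate: {oddball_hz} Hz, amplitude there: {spectrum[idx]:.3f}")
```

Because the oddball response is periodic at a known frequency, no trial averaging or subjective thresholding is needed; the spectrum itself provides the objective index.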