1
Garlichs A, Lustig M, Gamer M, Blank H. Expectations guide predictive eye movements and information sampling during face recognition. iScience 2024; 27:110920. PMID: 39351204; PMCID: PMC11439840; DOI: 10.1016/j.isci.2024.110920.
Abstract
Context information has a crucial impact on our ability to recognize faces. Theoretical frameworks of predictive processing suggest that predictions derived from context guide the sampling of sensory evidence at informative locations. However, it is unclear how expectations influence visual information sampling during face perception. To investigate the effects of expectations on eye movements during face anticipation and recognition, we conducted two eye-tracking experiments (n = 34 each) using cued face morphs containing expected and unexpected facial features, as well as clear expected and unexpected faces. Participants performed predictive saccades toward expected facial features and fixated expected features more often and longer than unexpected ones. In face morphs, expected features attracted early eye movements, followed by unexpected features, indicating that top-down as well as bottom-up information drives face sampling. Our results provide compelling evidence that expectations influence face processing by guiding predictive and early eye movements toward anticipated informative locations, supporting predictive processing.
Affiliation(s)
- Annika Garlichs: Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Hamburg Brain School, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Mark Lustig: Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Department of Psychology, University of Hamburg, Hamburg, Germany
- Matthias Gamer: Department of Psychology, University of Würzburg, Würzburg, Germany
- Helen Blank: Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Hamburg Brain School, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Predictive Cognition, Research Center One Health Ruhr of the University Alliance Ruhr, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
2
Jankowski M, Goroncy A. Anatomical variants of acne differ in their impact on social perception. J Eur Acad Dermatol Venereol 2024; 38:1628-1636. PMID: 38379351; DOI: 10.1111/jdv.19798.
Abstract
BACKGROUND Acne negatively affects quality of life; however, quality-of-life scores correlate poorly with disease severity scores. Previous research demonstrated the existence of facial areas in which skin lesions have a greater impact on gaze patterns. We therefore hypothesized that anatomical variants of acne may be perceived differently. OBJECTIVES The aim was to investigate the effect of anatomical variants of acne on natural gaze patterns and the resulting impact on social perception of acne patients. METHODS We tracked eye movements of participants viewing neutral and emotional faces with acne. Images were rated for acne-related visual disturbance, and emotional faces were rated for valence intensity. Respondents of an online survey were asked to rate their perception of pictured individuals' personality traits. RESULTS All faces with acne were perceived as less attractive and received poorer personality judgements, with mid-facial acne presenting the smallest deviation from healthy faces. T-zone and mixed acne exhibited the least significant difference from each other in respondents' gaze behaviour patterns. In addition, there was no significant difference in respondents' grading of acne visual disturbance or in ratings for attractiveness, success and trustworthiness. U-zone adult female acne was rated as the most visually disturbing and received the lowest scores for attractiveness. Happy faces with adult female acne were rated as less happy compared to other acne variants and clear-skin faces. CONCLUSIONS Anatomical variants of acne have a distinct impact on gaze patterns and social perception. Adult female acne has the strongest negative effect on recognition of positive emotions in affected individuals, attractiveness ratings and the forming of social impressions. If perioral acne lesions are absent, frontal lesions determine the impact of acne on social perception irrespective of the presence of mid-facial lesions. This perceptual hierarchy should be taken into consideration when deciding treatment goals in acne patients, prioritizing remission in the perioral and frontal areas.
Affiliation(s)
- Marek Jankowski: Department of Dermatology and Venereology, Faculty of Medicine in Bydgoszcz, Nicolaus Copernicus University, Bydgoszcz, Poland
- Agnieszka Goroncy: Department of Mathematical Statistics and Data Mining, Faculty of Mathematics and Computer Science, Nicolaus Copernicus University in Torun, Torun, Poland
3
Paparelli A, Sokhn N, Stacchi L, Coutrot A, Richoz AR, Caldara R. Idiosyncratic fixation patterns generalize across dynamic and static facial expression recognition. Sci Rep 2024; 14:16193. PMID: 39003314; PMCID: PMC11246522; DOI: 10.1038/s41598-024-66619-4.
Abstract
Facial expression recognition (FER) is crucial for understanding the emotional state of others during human social interactions. It has been assumed that humans share universal visual sampling strategies to achieve this task. However, recent studies in face identification have revealed striking idiosyncratic fixation patterns, questioning the universality of face processing. More importantly, very little is known about whether such idiosyncrasies extend to the biologically relevant recognition of static and dynamic facial expressions of emotion (FEEs). To clarify this issue, we tracked observers' eye movements as they categorized static and ecologically valid dynamic faces displaying the six basic FEEs, all normalized for presentation time (1 s), contrast and global luminance across exposure time. We then used robust data-driven analyses combining statistical fixation maps with hidden Markov models to explore eye movements across FEEs and stimulus modalities. Our data revealed three spatially and temporally distinct, equally occurring face-scanning strategies during FER. Crucially, these visual sampling strategies were mostly comparably effective in FER and highly consistent across FEEs and modalities. Our findings show that spatiotemporal idiosyncratic gaze strategies also occur for the biologically relevant recognition of FEEs, further questioning the universality of FER and, more generally, face processing.
Affiliation(s)
- Anita Paparelli: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Faucigny 2, 1700 Fribourg, Switzerland
- Nayla Sokhn: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Faucigny 2, 1700 Fribourg, Switzerland
- Lisa Stacchi: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Faucigny 2, 1700 Fribourg, Switzerland
- Antoine Coutrot: Laboratoire d'Informatique en Image et Systèmes d'information, French Centre National de la Recherche Scientifique, University of Lyon, Lyon, France
- Anne-Raphaëlle Richoz: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Faucigny 2, 1700 Fribourg, Switzerland
- Roberto Caldara: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Faucigny 2, 1700 Fribourg, Switzerland
4
Xu K. Insights into the relationship between eye movements and personality traits in restricted visual fields. Sci Rep 2024; 14:10261. PMID: 38704441; PMCID: PMC11069522; DOI: 10.1038/s41598-024-60992-w.
Abstract
Previous studies have suggested that behavioral patterns, such as visual attention and eye movements, relate to individual personality traits. However, these studies mainly focused on free visual tasks, and the impact of visual field restriction remains inadequately understood. The primary objective of this study is to elucidate the patterns of conscious eye movements induced by visual field restriction and to examine how these patterns relate to individual personality traits. Building on previous research, we aimed to gain new insights through two behavioral experiments, unraveling the intricate relationship between visual behaviors and individual personality traits. Both Experiment 1 and Experiment 2 revealed differences in eye movements between free observation and visual field restriction. In particular, simulation results based on the analyzed data showed clear distinctions in eye movements between the free observation and visual field restriction conditions. This suggests that eye movements during free observation involve a mixture of conscious and unconscious eye movements. Furthermore, we observed significant correlations between conscious eye movements and personality traits, with more pronounced effects in the visual field restriction condition used in Experiment 2 than in Experiment 1. These findings provide a novel perspective on human cognitive processes through visual perception.
Affiliation(s)
- Kuangzhe Xu: Institute for Promotion of Higher Education, Hirosaki University, Aomori, 036-8560, Japan
5
Azadi R, Lopez E, Taubert J, Patterson A, Afraz A. Inactivation of face-selective neurons alters eye movements when free viewing faces. Proc Natl Acad Sci U S A 2024; 121:e2309906121. PMID: 38198528; PMCID: PMC10801883; DOI: 10.1073/pnas.2309906121.
Abstract
During free viewing, faces attract gaze and induce specific fixation patterns corresponding to the facial features. This suggests that neurons encoding the facial features are in the causal chain that steers the eyes. However, there is no physiological evidence to support a mechanistic link between face-encoding neurons in high-level visual areas and the oculomotor system. In this study, we targeted the middle face patches of the inferior temporal (IT) cortex in two macaque monkeys using a functional magnetic resonance imaging (fMRI) localizer. We then used muscimol microinjection to unilaterally suppress IT neural activity inside and outside the face patches and recorded eye movements while the animals freely viewed natural scenes. Inactivation of the face-selective neurons altered the pattern of eye movements on faces: the monkeys found faces in the scene but neglected the eye contralateral to the inactivated hemisphere. These findings reveal the causal contribution of high-level visual cortex to eye movements.
Affiliation(s)
- Reza Azadi: Unit on Neurons, Circuits and Behavior, Laboratory of Neuropsychology, National Institute of Mental Health, NIH, Bethesda, MD 20892
- Emily Lopez: Unit on Neurons, Circuits and Behavior, Laboratory of Neuropsychology, National Institute of Mental Health, NIH, Bethesda, MD 20892
- Jessica Taubert: Section on Neurocircuitry, Laboratory of Brain and Cognition, National Institute of Mental Health, NIH, Bethesda, MD 20892; School of Psychology, The University of Queensland, Brisbane, QLD 4072, Australia
- Amanda Patterson: Section on Neurocircuitry, Laboratory of Brain and Cognition, National Institute of Mental Health, NIH, Bethesda, MD 20892
- Arash Afraz: Unit on Neurons, Circuits and Behavior, Laboratory of Neuropsychology, National Institute of Mental Health, NIH, Bethesda, MD 20892
6
Viktorsson C, Valtakari NV, Falck-Ytter T, Hooge ITC, Rudling M, Hessels RS. Stable eye versus mouth preference in a live speech-processing task. Sci Rep 2023; 13:12878. PMID: 37553414; PMCID: PMC10409748; DOI: 10.1038/s41598-023-40017-8.
Abstract
Looking at the mouth region is thought to be a useful strategy for speech-perception tasks. The tendency to look at the eyes versus the mouth of another person during speech processing has thus far mainly been studied using screen-based paradigms. In this study, we estimated the eye-mouth-index (EMI) of 38 adult participants in a live setting. Participants were seated across the table from an experimenter, who read sentences out loud for the participant to remember in both a familiar (English) and unfamiliar (Finnish) language. No statistically significant difference in the EMI between the familiar and the unfamiliar languages was observed. Total relative looking time at the mouth also did not predict the number of correctly identified sentences. Instead, we found that the EMI was higher during an instruction phase than during the speech-processing task. Moreover, we observed high intra-individual correlations in the EMI across the languages and different phases of the experiment. We conclude that there are stable individual differences in looking at the eyes versus the mouth of another person. Furthermore, this behavior appears to be flexible and dependent on the requirements of the situation (speech processing or not).
Affiliation(s)
- Charlotte Viktorsson: Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Uppsala, Sweden
- Niilo V Valtakari: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Terje Falck-Ytter: Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Uppsala, Sweden; Center of Neurodevelopmental Disorders (KIND), Division of Neuropsychiatry, Department of Women's and Children's Health, Karolinska Institutet, Stockholm, Sweden
- Ignace T C Hooge: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Maja Rudling: Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Uppsala, Sweden
- Roy S Hessels: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
7
Azadi R, Lopez E, Taubert J, Patterson A, Afraz A. Inactivation of face-selective neurons alters eye movements when free viewing faces. bioRxiv [preprint] 2023:2023.06.20.544678. PMID: 37502993; PMCID: PMC10370202; DOI: 10.1101/2023.06.20.544678.
Abstract
During free viewing, faces attract gaze and induce specific fixation patterns corresponding to the facial features. This suggests that neurons encoding the facial features are in the causal chain that steers the eyes. However, there is no physiological evidence to support a mechanistic link between face-encoding neurons in high-level visual areas and the oculomotor system. In this study, we targeted the middle face patches of inferior temporal (IT) cortex in two macaque monkeys using an fMRI localizer. We then used muscimol microinjection to unilaterally suppress IT neural activity inside and outside the face patches and recorded eye movements while the animals freely viewed natural scenes. Inactivation of the face-selective neurons altered the pattern of eye movements on faces: the monkeys found faces in the scene but neglected the eye contralateral to the inactivated hemisphere. These findings reveal the causal contribution of high-level visual cortex to eye movements. Significance: It has been shown, for more than half a century, that eye movements follow distinctive patterns when free viewing faces. This suggests causal involvement of face-encoding visual neurons in the eye movements. However, the literature offers scant evidence for this possibility and has focused mostly on the link between low-level image saliency and eye movements. Here, for the first time, we provide causal evidence showing how face-selective neurons in inferior temporal cortex inform and steer eye movements when free viewing faces.
8
Looking at faces in the wild. Sci Rep 2023; 13:783. PMID: 36646709; PMCID: PMC9842722; DOI: 10.1038/s41598-022-25268-1.
Abstract
Faces are key to everyday social interactions, but our understanding of social attention is based on experiments that present images of faces on computer screens. Advances in wearable eye-tracking devices now enable studies in unconstrained natural settings, but this approach has been limited by manual coding of fixations. Here we introduce an automatic 'dynamic region of interest' approach that registers eye fixations to bodies and faces seen while a participant moves through the environment. We show that just 14% of fixations are to faces of passersby, contrasting with prior screen-based studies that suggest faces automatically capture visual attention. We also demonstrate the potential for this new tool to help understand differences in individuals' social attention and the content of their perceptual exposure to other people. Together, this can form the basis of a new paradigm for studying social attention 'in the wild' that opens new avenues for theoretical, applied and clinical research.
9
Stacchi L, Caldara R. Stimulus size modulates idiosyncratic neural face identity discrimination. J Vis 2022; 22:9. PMID: 36580295; PMCID: PMC9804033; DOI: 10.1167/jov.22.13.9.
Abstract
Humans show individual differences in neural facial identity discrimination (FID) responses across viewing positions. Critically, these variations have been shown to be reliable over time and to directly relate to observers' idiosyncratic preferences in facial information sampling. This functional signature in facial identity processing might relate to observer-specific diagnostic information processing. Although these individual differences are a valuable source of information for interpreting data, they can also be difficult to isolate when it is not possible to test many conditions. To address this potential issue, we explored whether reducing stimulus size would help decrease these interindividual variations in neural FID. We manipulated the size of face stimuli (covering 3°, 5°, 6.7°, 8.5°, and 12° of visual angle), as well as the fixation location (left eye, right eye, below the nasion, nose, and mouth) while recording electrophysiological responses. Same identity faces were presented with a base frequency of 6 Hz. Different identity faces were periodically inserted within this sequence to trigger an objective index of neural FID. Our data show robust and consistent individual differences in neural face identity discrimination across viewing positions for all face sizes. Nevertheless, FID was optimal for a larger number of observers when faces subtended 6.7° of visual angle and fixation was below the nasion. This condition is the most suited to reduce natural interindividual variations in neural FID patterns, defining an important benchmark to measure neural FID when it is not possible to assess and control for observers' idiosyncrasies.
Affiliation(s)
- Lisa Stacchi: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Roberto Caldara: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
10
Li YF, Ying H. Disrupted visual input unveils the computational details of artificial neural networks for face perception. Front Comput Neurosci 2022; 16:1054421. PMID: 36523327; PMCID: PMC9744930; DOI: 10.3389/fncom.2022.1054421.
Abstract
Background The deep convolutional neural network (DCNN), with its strong performance, has attracted the attention of researchers from many disciplines. Studies of DCNNs and of biological neural systems have inspired each other reciprocally. Brain-inspired neural networks not only achieve great performance but also serve as computational models of biological neural systems. Methods In this study, we trained and tested several typical DCNNs (AlexNet, VGG11, VGG13, VGG16, DenseNet, MobileNet, and EfficientNet) on a face ethnicity categorization task in Experiment 1 and an emotion categorization task in Experiment 2. We measured the performance of the DCNNs by testing them with original and lossy visual inputs (various kinds of image occlusion) and compared their performance with that of human participants. Moreover, the class activation map (CAM) method allowed us to visualize the foci of the "attention" of these DCNNs. Results The results suggested that VGG13 performed the best: its performance closely resembled that of human participants in terms of psychophysical measurements, it utilized similar areas of the visual input as humans, and it had the most consistent performance across inputs with various kinds of impairment. Discussion In general, we examined the processing mechanisms of DCNNs using a new paradigm and found that VGG13 might be the most human-like DCNN in this task. This study also highlights a possible paradigm for studying and developing DCNNs using human perception as a benchmark.
Affiliation(s)
- Haojiang Ying: Department of Psychology, Soochow University, Suzhou, China
11
Linka M, Broda MD, Alsheimer T, de Haas B, Ramon M. Characteristic fixation biases in Super-Recognizers. J Vis 2022; 22:17. PMID: 35900724; PMCID: PMC9344214; DOI: 10.1167/jov.22.8.17.
Abstract
Neurotypical observers show large and reliable individual differences in gaze behavior along several semantic object dimensions. Individual gaze behavior toward faces has been linked to face identity processing, including that of neurotypical observers. Here, we investigated potential gaze biases in Super-Recognizers (SRs), individuals with exceptional face identity processing skills. Ten SRs, identified with a novel conservative diagnostic framework, and 43 controls freely viewed 700 complex scenes depicting more than 5000 objects. First, we tested whether SRs and controls differ in fixation biases along four semantic dimensions: faces, text, objects being touched, and bodies. Second, we tested potential group differences in fixation biases toward eyes and mouths. Finally, we tested whether SRs fixate closer to the theoretical optimal fixation point for face identification. SRs showed a stronger gaze bias toward faces and away from text and touched objects, starting from the first fixation onward. Further, SRs spent a significantly smaller proportion of first fixations and dwell time toward faces on mouths but did not differ in dwell time or first fixations devoted to eyes. Face fixation of SRs also fell significantly closer to the theoretical optimal fixation point for identification, just below the eyes. Our findings suggest that reliable superiority for face identity processing is accompanied by early fixation biases toward faces and preferred saccadic landing positions close to the theoretical optimum for face identification. We discuss future directions to investigate the functional basis of individual fixation behavior and face identity processing ability.
Affiliation(s)
- Marcel Linka: Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Tamara Alsheimer: Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany; Applied Face Cognition Lab, Institute of Psychology, University of Lausanne, Lausanne, Switzerland
- Benjamin de Haas: Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Meike Ramon: Applied Face Cognition Lab, Institute of Psychology, University of Lausanne, Lausanne, Switzerland
12
Breil C, Huestegge L, Böckler A. From eye to arrow: Attention capture by direct gaze requires more than just the eyes. Atten Percept Psychophys 2022; 84:64-75. PMID: 34729707; PMCID: PMC8794969; DOI: 10.3758/s13414-021-02382-2.
Abstract
Human attention is strongly attracted by direct gaze and sudden onset motion. The sudden direct-gaze effect refers to the processing advantage for targets appearing on peripheral faces that suddenly establish eye contact. Here, we investigate the necessity of social information for attention capture by (sudden onset) ostensive cues. Six experiments involving 204 participants applied (1) naturalistic faces, (2) arrows, (3) schematic eyes, (4) naturalistic eyes, or schematic facial configurations (5) without or (6) with head turn to an attention-capture paradigm. Trials started with two stimuli oriented towards the observer and two stimuli pointing into the periphery. Simultaneously with target presentation, one direct stimulus changed to averted and one averted stimulus changed to direct, yielding a 2 × 2 factorial design with direction and motion cues being absent or present. We replicated the (sudden) direct-gaze effect for photographic faces but found no corresponding effects in Experiments 2-6. Hence, a holistic and socially meaningful facial context seems vital for attention capture by direct gaze. STATEMENT OF SIGNIFICANCE: The present study highlights the significance of context information for social attention. Our findings demonstrate that the direct-gaze effect, that is, the prioritization of direct gaze over averted gaze, critically relies on the presentation of a meaningful holistic and naturalistic facial context. This pattern of results is evidence in favor of early effects of surrounding social information on attention capture by direct gaze.
Affiliation(s)
- Christina Breil: Julius-Maximilians-University of Würzburg, Würzburg, Germany; Department of Psychology III, University of Würzburg, Röntgenring 11, 97070 Würzburg, Germany
- Lynn Huestegge: Julius-Maximilians-University of Würzburg, Würzburg, Germany
- Anne Böckler: Leibniz University Hannover, Hannover, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
13
Duran N, Atkinson AP. Foveal processing of emotion-informative facial features. PLoS One 2021; 16:e0260814. PMID: 34855898; PMCID: PMC8638924; DOI: 10.1371/journal.pone.0260814.
Abstract
Certain facial features provide useful information for recognition of facial expressions. In two experiments, we investigated whether foveating informative features of briefly presented expressions improves recognition accuracy and whether these features are targeted reflexively when not foveated. Angry, fearful, surprised, and sad or disgusted expressions were presented briefly at locations which would ensure foveation of specific features. Foveating the mouth of fearful, surprised and disgusted expressions improved emotion recognition compared to foveating an eye or cheek or the central brow. Foveating the brow led to equivocal results in anger recognition across the two experiments, which might be due to the different combination of emotions used. There was no consistent evidence suggesting that reflexive first saccades targeted emotion-relevant features; instead, they targeted the closest feature to initial fixation. In a third experiment, angry, fearful, surprised and disgusted expressions were presented for 5 seconds. Duration of task-related fixations in the eyes, brow, nose and mouth regions was modulated by the presented expression. Moreover, longer fixation at the mouth positively correlated with anger and disgust accuracy both when these expressions were freely viewed (Experiment 2b) and when briefly presented at the mouth (Experiment 2a). Finally, an overall preference to fixate the mouth across all expressions correlated positively with anger and disgust accuracy. These findings suggest that foveal processing of informative features is functional/contributory to emotion recognition, but they are not automatically sought out when not foveated, and that facial emotion recognition performance is related to idiosyncratic gaze behaviour.
Affiliation(s)
- Nazire Duran: Department of Psychology, Durham University, Durham, United Kingdom
- Anthony P. Atkinson: Department of Psychology, Durham University, Durham, United Kingdom
14
Holleman GA, Hooge ITC, Huijding J, Deković M, Kemner C, Hessels RS. Gaze and speech behavior in parent–child interactions: The role of conflict and cooperation. Curr Psychol 2021. DOI: 10.1007/s12144-021-02532-7.
Abstract
A primary mode of human social behavior is face-to-face interaction. In this study, we investigated the characteristics of gaze and its relation to speech behavior during video-mediated face-to-face interactions between parents and their preadolescent children. 81 parent–child dyads engaged in conversations about cooperative and conflictive family topics. We used a dual eye-tracking setup capable of concurrently recording eye movements, frontal video, and audio from two conversational partners. Our results show that children spoke more in the cooperation scenario, whereas parents spoke more in the conflict scenario. Parents gazed slightly more at the eyes of their children in the conflict scenario than in the cooperation scenario. Both parents and children looked more at the other's mouth region while listening than while speaking. Results are discussed in terms of the roles that parents and children take during cooperative and conflictive interactions and how gaze behavior may support and coordinate such interactions.
15
Kliewer MA, Bagley AR. How to Read an Abdominal CT: Insights from the Visual and Cognitive Sciences Translated for Clinical Practice. Curr Probl Diagn Radiol 2021; 51:639-647. PMID: 34583872; DOI: 10.1067/j.cpradiol.2021.07.006
Abstract
When first learning to read abdominal CT studies, residents are often given little concrete, practical direction. There is, however, a large literature from the visual and cognitive sciences that can provide guidance toward search strategies that maximize efficiency and comprehensiveness. This literature has not penetrated radiology teaching to any great extent. In this article, we examine the current pedagogy (and why it falls short), why untutored search fails, where misses occur in abdomen/pelvis CT, why these misses occur where they do, how expert radiologists search 3D image stacks, and how novices might expedite the acquisition of expertise.
Affiliation(s)
- Mark A Kliewer: Department of Radiology, University of Wisconsin - Madison, Madison, Wisconsin, USA
- Anjuli R Bagley: Department of Radiology, University of Colorado - Denver and University of Colorado Hospital (UCH), Aurora, Colorado, USA

16
Prunty JE, Keemink JR, Kelly DJ. Infants scan static and dynamic facial expressions differently. Infancy 2021; 26:831-856. PMID: 34288344; DOI: 10.1111/infa.12426
Abstract
Despite facial expressions being inherently dynamic phenomena, much of our understanding of how infants attend to and scan them is based on static face stimuli. Here we investigate how six-, nine-, and twelve-month-old infants allocate their visual attention toward dynamic-interactive videos of the six basic emotional expressions, and compare their responses with static images of the same stimuli. We find that infants show clear differences in how they attend to and scan dynamic and static expressions, looking longer toward the dynamic-face and lower-face regions. Infants across all age groups show differential interest in expressions and precise scanning of regions "diagnostic" for emotion recognition. These data also indicate that infants' attention toward dynamic expressions develops over the first year of life, including relative increases in interest and scanning precision toward some negative facial expressions (e.g., anger, fear, and disgust).
Affiliation(s)
- David J Kelly: School of Psychology, University of Kent, Canterbury, UK

17
Yitzhak N, Pertzov Y, Aviezer H. The elusive link between eye-movement patterns and facial expression recognition. Social and Personality Psychology Compass 2021. DOI: 10.1111/spc3.12621
Affiliation(s)
- Neta Yitzhak: Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel
- Yoni Pertzov: Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel
- Hillel Aviezer: Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel

18
The Use of Crowdsourcing Technology to Evaluate Preoperative Severity in Patients With Unilateral Cleft Lip in a Multiethnic Population. J Craniofac Surg 2021; 32:482-485. PMID: 33704965; DOI: 10.1097/scs.0000000000006917
Abstract
Crowdsourcing has been used in multiple disciplines to quickly generate large amounts of diverse data. The objective of this study was to use crowdsourcing to grade the preoperative severity of the unilateral cleft lip phenotype in a multiethnic cohort, with the hypothesis that crowdsourcing could efficiently achieve rankings similar to those of expert surgeons. Deidentified preoperative photos were collected for patients with primary, unilateral cleft lip with or without cleft palate (CL ± P). A platform was developed with C-SATS for pairwise comparisons using Elo rankings by crowdsource workers through Amazon Mechanical Turk. Images were independently ranked by 2 senior surgeons for comparison. Seventy-six patients with varying severity of the unilateral CL ± P phenotype were chosen from Operation Smile missions in Bolivia, Madagascar, Vietnam, and Morocco. Patients were on average 1.2 years old, ranging from 3 months to 3.3 years. Each image was compared with 10 others, for a total of 380 unique pairwise comparisons. A total of 4627 raters participated, with a median of 12 raters per pair. Data collection was completed in <20 hours. The crowdsourced and expert surgeon rankings were highly correlated (Pearson correlation coefficient R = 0.77, P = 0.0001). Crowdsourcing provides a rapid and convenient method of obtaining preoperative severity ratings, comparable to expert surgeon assessment, across multiple ethnicities. The method is a potential solution to the current lack of rating systems for preoperative severity and overcomes the difficulty of acquiring large-scale assessments from expert surgeons.
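The Elo scheme used for the pairwise comparisons above can be illustrated with a minimal sketch. The K-factor, the 1500 starting rating, and the comparison data below are illustrative assumptions, not parameters reported in the study.

```python
# Minimal Elo-style ranking from pairwise severity comparisons.
# K_FACTOR and the starting rating of 1500 are illustrative assumptions.

K_FACTOR = 32

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that item A is chosen over item B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(ratings: dict, winner: str, loser: str) -> None:
    """Update ratings after one pairwise comparison won by `winner`."""
    e_w = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += K_FACTOR * (1.0 - e_w)
    ratings[loser] -= K_FACTOR * (1.0 - e_w)

# Hypothetical comparisons: (image judged more severe, image judged less severe).
comparisons = [("img_A", "img_B"), ("img_A", "img_C"), ("img_B", "img_C")]
ratings = {img: 1500.0 for img in ("img_A", "img_B", "img_C")}
for winner, loser in comparisons:
    update(ratings, winner, loser)

# A higher rating means the crowd judged the cleft phenotype as more severe.
ranking = sorted(ratings, key=ratings.get, reverse=True)
```

Because every comparison moves only the two ratings involved, aggregating many small judgments from independent raters converges on a stable severity ordering without any rater seeing the full image set.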
19
Avoiding potential pitfalls in visual search and eye-movement experiments: A tutorial review. Atten Percept Psychophys 2021; 83:2753-2783. PMID: 34089167; PMCID: PMC8460493; DOI: 10.3758/s13414-021-02326-w
Abstract
Examining eye-movement behavior during visual search is an increasingly popular approach for gaining insights into the moment-to-moment processing that takes place when we look for targets in our environment. In this tutorial review, we describe a set of pitfalls and considerations that are important for researchers – both experienced and new to the field – when engaging in eye-movement and visual search experiments. We walk the reader through the research cycle of a visual search and eye-movement experiment, from choosing the right predictions, through to data collection, reporting of methodology, analytic approaches, the different dependent variables to analyze, and drawing conclusions from patterns of results. Overall, our hope is that this review can serve as a guide, a talking point, a reflection on the practices and potential problems with the current literature on this topic, and ultimately a first step towards standardizing research practices in the field.
20
Avidan G, Behrmann M. Spatial Integration in Normal Face Processing and Its Breakdown in Congenital Prosopagnosia. Annu Rev Vis Sci 2021; 7:301-321. PMID: 34014762; DOI: 10.1146/annurev-vision-113020-012740
Abstract
Congenital prosopagnosia (CP), a lifelong impairment in face processing that occurs in the absence of any apparent brain damage, provides a unique model in which to explore the psychological and neural bases of normal face processing. The goal of this review is to offer a theoretical and conceptual framework that may account for the underlying cognitive and neural deficits in CP. This framework may also provide a novel perspective in which to reconcile some conflicting results and permit the expansion of research in this field in new directions. The crux of this framework lies in linking the known behavioral and neural underpinnings of face processing, and their impairments in CP, to a model incorporating grid cell-like activity in the entorhinal cortex. Moreover, it stresses the involvement of active, spatial scanning of the environment with eye movements and implicates their critical role in face encoding and recognition. We begin by describing the main behavioral and neural characteristics of CP, then lay down the building blocks of our proposed model, referring to the existing literature supporting this new framework. We then propose testable predictions and conclude with open questions for future research stemming from this model.
Affiliation(s)
- Galia Avidan: Department of Psychology and Department of Cognitive and Brain Sciences, Ben-Gurion University of the Negev, Beer-Sheva 8410501, Israel
- Marlene Behrmann: Department of Psychology and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA

21
Reimann GE, Walsh C, Csumitta KD, McClure P, Pereira F, Martin A, Ramot M. Gauging facial feature viewing preference as a stable individual trait in autism spectrum disorder. Autism Res 2021; 14:1670-1683. PMID: 34008916; DOI: 10.1002/aur.2540
Abstract
Eye tracking provides insights into social processing deficits in autism spectrum disorder (ASD), especially in conjunction with dynamic, naturalistic free-viewing stimuli. However, the question remains whether gaze characteristics, such as preference for specific facial features, can be considered a stable individual trait, particularly in those with ASD. If so, how much data are needed for consistent estimations? To address these questions, we assessed the stability and robustness of gaze preference for facial features as incremental amounts of movie data were introduced for analysis. We trained an artificial neural network to create an object-based segmentation of naturalistic movie clips (14 s each, 7410 frames total). Thirty-three high-functioning individuals with ASD and 36 age- and IQ-equated typically developing individuals (age range: 12-30 years) viewed 22 Hollywood movie clips, each depicting a social interaction. As we evaluated combinations of one, three, five, eight, and 11 movie clips, gaze dwell times on core facial features became increasingly stable at within-subject, within-group, and between-group levels. Using a number of movie clips deemed sufficient by our analysis, we found that individuals with ASD displayed significantly less face-centered gaze (centralized on the nose; p < 0.001) but did not significantly differ from typically developing participants in eye or mouth looking times. Our findings validate gaze preference for specific facial features as a stable individual trait and highlight the possibility of misinterpretation with insufficient data. Additionally, we propose the use of a machine learning approach to stimuli segmentation to quickly and flexibly prepare dynamic stimuli for analysis. LAY SUMMARY: Using a data-driven approach to segmenting movie stimuli, we examined varying amounts of data to assess the stability of social gaze in individuals with autism spectrum disorder (ASD). We found a reduction in social fixations in participants with ASD, driven by decreased attention to the center of the face. Our findings further support the validity of gaze preference for face features as a stable individual trait when sufficient data are used.
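The dwell-time measures that several of the studies in this list rely on reduce to a simple aggregation: sum fixation durations per face region (area of interest) and normalize by total looking time. The sketch below illustrates this under stated assumptions; the region labels and fixation data are hypothetical, not any study's actual AOI definitions or recordings.

```python
# Sketch: aggregate fixation durations into per-region (AOI) dwell times
# and convert them to proportions of total looking time.
# Region names and fixation durations are hypothetical examples.
from collections import defaultdict

def dwell_proportions(fixations):
    """fixations: iterable of (region_label, duration_ms) tuples.

    Returns a dict mapping each region to its share of total looking time.
    """
    totals = defaultdict(float)
    for region, duration in fixations:
        totals[region] += duration
    grand = sum(totals.values())
    return {region: t / grand for region, t in totals.items()}

# One hypothetical trial: four fixations labeled by face region.
fixations = [("eyes", 220), ("nose", 180), ("eyes", 140), ("mouth", 60)]
props = dwell_proportions(fixations)
# props["eyes"] is the proportion of total looking time spent on the eyes.
```

Normalizing to proportions rather than raw milliseconds is what makes dwell times comparable across trials and participants with different total viewing durations.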
Affiliation(s)
- Gabrielle E Reimann: Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland, USA
- Catherine Walsh: Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland, USA
- Kelsey D Csumitta: Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland, USA
- Patrick McClure: Section on Functional Imaging Methods, National Institute of Mental Health, Bethesda, Maryland, USA
- Francisco Pereira: Section on Functional Imaging Methods, National Institute of Mental Health, Bethesda, Maryland, USA
- Alex Martin: Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland, USA
- Michal Ramot: Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland, USA

22
Abstract
The unprecedented efforts to minimize the effects of the COVID-19 pandemic introduce a new arena for human face recognition in which faces are partially occluded with masks. Here, we tested the extent to which face masks change the way faces are perceived. To this end, we evaluated face processing abilities for masked and unmasked faces in a large online sample of adult observers (n = 496) using an adapted version of the Cambridge Face Memory Test, a validated measure of face perception abilities in humans. As expected, a substantial decrease in performance was found for masked faces. Importantly, the inclusion of masks also led to a qualitative change in the way masked faces are perceived. In particular, holistic processing, the hallmark of face perception, was disrupted for faces with masks, as suggested by a reduced inversion effect. Similar changes were found whether masks were included during the study or the test phases of the experiment. Together, we provide novel evidence for quantitative and qualitative alterations in the processing of masked faces that could have significant effects on daily activities and social interactions.
23
Abstract
Gaze (where one looks, how long, and when) plays an essential part in human social behavior. While many aspects of social gaze have been reviewed, there is no comprehensive review or theoretical framework that describes how gaze to faces supports face-to-face interaction. In this review, I address the following questions: (1) When does gaze need to be allocated to a particular region of a face in order to provide the relevant information for successful interaction? (2) How do humans look at other people, and faces in particular, regardless of whether gaze needs to be directed at a particular region to acquire the relevant visual information? (3) How does gaze support the regulation of interaction? The work reviewed spans psychophysical research, observational research, and eye-tracking research in both lab-based and interactive contexts. Based on the literature overview, I sketch a framework for future research based on dynamic systems theory. The framework holds that gaze should be investigated in relation to sub-states of the interaction, encompassing sub-states of the interactors, the content of the interaction, and the interactive context. The relevant sub-states for understanding gaze in interaction vary over different timescales, from microgenesis to ontogenesis and phylogenesis. The framework has important implications for vision science, psychopathology, developmental science, and social robotics.
Affiliation(s)
- Roy S Hessels: Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands; Developmental Psychology, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands

24
Hessels RS, Benjamins JS, van Doorn AJ, Koenderink JJ, Holleman GA, Hooge ITC. Looking behavior and potential human interactions during locomotion. J Vis 2020; 20:5. PMID: 33007079; PMCID: PMC7545070; DOI: 10.1167/jov.20.10.5
Abstract
As humans move through parts of their environment, they meet others who may or may not try to interact with them. Where do people look when they meet others? We had participants wearing an eye tracker walk through a university building. On the way, they encountered nine “walkers.” Walkers were instructed to, for example, ignore the participant, greet him or her, or attempt to hand out a flyer. The participants' gaze was mostly directed to the currently relevant body parts of the walker; thus, gaze depended on the walker's action. Individual differences in participants' looking behavior were consistent across walkers. Participants who did not respond to the walker seemed to look less at that walker, although this difference was not statistically significant. We suggest that models of gaze allocation should take social motivation into account.
Affiliation(s)
- Roy S Hessels: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, the Netherlands
- Jeroen S Benjamins: Experimental Psychology, Helmholtz Institute, and Social, Health and Organizational Psychology, Utrecht University, Utrecht, the Netherlands
- Andrea J van Doorn: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, the Netherlands
- Jan J Koenderink: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, the Netherlands
- Gijs A Holleman: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, the Netherlands
- Ignace T C Hooge: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, the Netherlands

25
Holleman GA, Hessels RS, Kemner C, Hooge ITC. Implying social interaction and its influence on gaze behavior to the eyes. PLoS One 2020; 15:e0229203. PMID: 32092089; PMCID: PMC7039466; DOI: 10.1371/journal.pone.0229203
Abstract
Researchers have increasingly focused on how the potential for social interaction modulates basic processes of visual attention and gaze behavior. In this study, we investigated why people may experience social interaction and what factors contributed to their subjective experience. We furthermore investigated whether implying social interaction modulated gaze behavior to people’s faces, specifically the eyes. To imply the potential for interaction, participants received either one of two instructions: 1) they would be presented with a person via a ‘live’ video-feed, or 2) they would be presented with a pre-recorded video clip of a person. Prior to the presentation, a confederate walked into a separate room to suggest to participants that (s)he was being positioned behind a webcam. In fact, all participants were presented with a pre-recorded clip. During the presentation, we measured participants’ gaze behavior with an eye tracker, and after the presentation, participants were asked whether they believed that the confederate was ‘live’ or not, and, why they thought so. Participants varied greatly in their judgements about whether the confederate was ‘live’ or not. Analyses of gaze behavior revealed that a large subset of participants who received the live-instruction gazed less at the eyes of confederates compared with participants who received the pre-recorded-instruction. However, for both the live-instruction group and the pre-recorded instruction group, another subset of participants gazed predominantly at the eyes. The current findings may contribute to the development of experimental designs aimed to capture the interactive aspects of social cognition and visual attention.
Affiliation(s)
- Gijs A. Holleman: Experimental psychology, Helmholtz Institute, Utrecht University, Utrecht, the Netherlands; Developmental psychology, Utrecht University, Utrecht, the Netherlands
- Roy S. Hessels: Experimental psychology, Helmholtz Institute, Utrecht University, Utrecht, the Netherlands; Developmental psychology, Utrecht University, Utrecht, the Netherlands
- Chantal Kemner: Experimental psychology, Helmholtz Institute, Utrecht University, Utrecht, the Netherlands; Developmental psychology, Utrecht University, Utrecht, the Netherlands; Brain Center, University Medical Center Utrecht, Utrecht, the Netherlands
- Ignace T. C. Hooge: Experimental psychology, Helmholtz Institute, Utrecht University, Utrecht, the Netherlands

26
Stacchi L, Liu-Shuang J, Ramon M, Caldara R. Reliability of individual differences in neural face identity discrimination. Neuroimage 2019; 189:468-475. DOI: 10.1016/j.neuroimage.2019.01.023
27
Neural Representations of Faces Are Tuned to Eye Movements. J Neurosci 2019; 39:4113-4123. PMID: 30867260; DOI: 10.1523/jneurosci.2968-18.2019
Abstract
Eye movements provide a functional signature of how human vision is achieved. Many recent studies have consistently reported robust idiosyncratic visual sampling strategies during face recognition. Whether these interindividual differences are mirrored by idiosyncratic neural responses remains unknown. To this end, we first tracked eye movements of male and female observers during face recognition. Additionally, for every observer we obtained an objective index of neural face discrimination through EEG recorded while they fixated different facial information. We found that foveation of facial features fixated longer during face recognition elicited stronger neural face discrimination responses across all observers. This relationship occurred independently of interindividual differences in preferential facial information sampling (e.g., eye vs. mouth lookers) and started as early as the first fixation. Our data show that eye movements play a functional role during face processing by providing the neural system with the information that is diagnostic to a specific observer. The effective processing of identity involves idiosyncratic, rather than universal, face representations. SIGNIFICANCE STATEMENT: When engaging in face recognition, observers deploy idiosyncratic fixation patterns to sample facial information. Whether these individual differences concur with idiosyncratic face-sensitive neural responses remains unclear. To address this issue, we recorded observers' fixation patterns, as well as their neural face discrimination responses elicited during fixation of 10 different locations on the face, corresponding to different types of facial information. Our data reveal a clear interplay between individuals' face-sensitive neural responses and their idiosyncratic eye-movement patterns during identity processing, which emerges as early as the first fixation. Collectively, our findings favor the existence of idiosyncratic, rather than universal, face representations.
28
Hessels RS, Holleman GA, Kingstone A, Hooge IT, Kemner C. Gaze allocation in face-to-face communication is affected primarily by task structure and social context, not stimulus-driven factors. Cognition 2019; 184:28-43. DOI: 10.1016/j.cognition.2018.12.005
29
Arizpe JM, Noles DL, Tsao JW, Chan AWY. Eye Movement Dynamics Differ between Encoding and Recognition of Faces. Vision (Basel) 2019; 3:vision3010009. PMID: 31735810; PMCID: PMC6802769; DOI: 10.3390/vision3010009
Abstract
Facial recognition is widely thought to involve a holistic perceptual process, and optimal recognition performance can be achieved rapidly, within two fixations. However, is facial identity encoding likewise holistic and rapid, and how do gaze dynamics during encoding relate to recognition? While having their eye movements tracked, participants completed an encoding ("study") phase and a subsequent recognition ("test") phase, each divided into blocks of one- or five-second stimulus presentation time conditions to distinguish the influences of experimental phase (encoding/recognition) and stimulus presentation time (short/long). Within the first two fixations, several differences between encoding and recognition were evident in the temporal and spatial dynamics of the eye movements. Most importantly, in behavior, only the long study-phase presentation time improved recognition performance (i.e., longer presentation at recognition did not improve performance), revealing that encoding is not as rapid as recognition: longer sequences of eye movements are functionally required to achieve optimal encoding than to achieve optimal recognition. Together, these results are inconsistent with a scan-path replay hypothesis. Rather, feature information seems to have been gradually integrated over many fixations during encoding, enabling recognition that could subsequently occur rapidly and holistically within a small number of fixations.
Affiliation(s)
- Joseph M. Arizpe: Department of Neurology, University of Tennessee Health Science Center, Memphis, TN 38163, USA; Children’s Foundation Research Institute, Le Bonheur Children’s Hospital, Memphis, TN 38103, USA; Science Applications International Corporation (SAIC), Fort Sam Houston, TX 78234, USA
- Danielle L. Noles: Department of Neurology, University of Tennessee Health Science Center, Memphis, TN 38163, USA; Children’s Foundation Research Institute, Le Bonheur Children’s Hospital, Memphis, TN 38103, USA; School of Medicine, University of Tennessee Health Science Center, Memphis, TN 38163, USA
- Jack W. Tsao: Department of Neurology, University of Tennessee Health Science Center, Memphis, TN 38163, USA; Children’s Foundation Research Institute, Le Bonheur Children’s Hospital, Memphis, TN 38103, USA; Department of Anatomy & Neurobiology, University of Tennessee Health Science Center, Memphis, TN 38163, USA; Memphis Veterans Affairs Medical Center, Memphis, TN 38104, USA
- Annie W.-Y. Chan: Department of Neurology, University of Tennessee Health Science Center, Memphis, TN 38163, USA; Children’s Foundation Research Institute, Le Bonheur Children’s Hospital, Memphis, TN 38103, USA; Department of Radiology, University of Tennessee Health Science Center, Memphis, TN 38163, USA; Department of Life Sciences, Centre for Cognitive Neuroscience, Division of Psychology, Brunel University London, London, UB8 3PH, UK

30
Davidenko N, Kopalle H, Bridgeman B. The Upper Eye Bias: Rotated Faces Draw Fixations to the Upper Eye. Perception 2018; 48:162-174. PMID: 30588863; DOI: 10.1177/0301006618819628
Abstract
There is a consistent left-gaze bias when observers fixate upright faces, but it is unknown how this bias manifests in rotated faces, where the two eyes appear at different heights on the face. In two eye-tracking experiments, we measured participants' first and second fixations, while they judged the expressions of upright and rotated faces. We hypothesized that rotated faces might elicit a bias to fixate the upper eye. Our results strongly confirmed this hypothesis, with the upper eye bias completely dominating the left-gaze bias in ±45° faces in Experiment 1, and across a range of face orientations (±11.25°, ±22.5°, ±33.75°, ±45°, and ±90°) in Experiment 2. In addition, rotated faces elicited more overall eye-directed fixations than upright faces. We consider potential mechanisms of the upper eye bias in rotated faces and discuss some implications for research in social cognition.
Affiliation(s)
- Nicolas Davidenko: Department of Psychology, University of California, Santa Cruz, CA, USA
- Hema Kopalle: Department of Neurosciences, University of California, San Diego, CA, USA
- Bruce Bridgeman: Department of Psychology, University of California, Santa Cruz, CA, USA

31
Royer J, Blais C, Charbonneau I, Déry K, Tardif J, Duchaine B, Gosselin F, Fiset D. Greater reliance on the eye region predicts better face recognition ability. Cognition 2018; 181:12-20. PMID: 30103033; DOI: 10.1016/j.cognition.2018.08.004
Abstract
Interest in using individual differences in face recognition ability to better understand the perceptual and cognitive mechanisms supporting face processing has grown substantially in recent years. The goal of this study was to determine how varying levels of face recognition ability are linked to changes in visual information extraction strategies in an identity recognition task. To address this question, fifty participants completed six tasks measuring face and object processing abilities. Using the Bubbles method (Gosselin & Schyns, 2001), we also measured each individual's use of visual information in face recognition. At the group level, our results replicate previous findings demonstrating the importance of the eye region for face identification. More importantly, we show that face processing ability is related to a systematic increase in the use of the eye area, especially the left eye from the observer's perspective. Indeed, our results suggest that the use of this region accounts for approximately 20% of the variance in face processing ability. These results support the idea that individual differences in face processing are at least partially related to the perceptual extraction strategy used during face identification.
Collapse
Affiliation(s)
- Jessica Royer
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Canada
| | - Caroline Blais
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Canada
| | - Isabelle Charbonneau
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Canada
| | - Karine Déry
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Canada
| | - Jessica Tardif
- Département de Psychologie, Université de Montréal, Canada
| | - Brad Duchaine
- Department of Psychological and Brain Sciences, Dartmouth College, United States
| | - Daniel Fiset
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Canada.
| |
Collapse
|
32
|
Abstract
We report the personal eye gaze patterns of people engaged in face-to-face getting-acquainted conversation. Considerable differences between individuals are underscored by a stability of eye gaze patterns within individuals. Results suggest the existence of an eye-mouth gaze continuum. This continuum includes some people showing a strong preference for eye gaze, some with a strong preference for mouth gaze, and others distributing their gaze between the eyes and mouth to varying extents. Additionally, we found evidence of within-participant consistency not just for location preference but also for the duration of fixations upon the eye and mouth regions. We also estimate that during a 4-minute getting-acquainted conversation, mutual face gaze constitutes about 60% of the conversation, occurring in typically brief instances of about 2.2 seconds. Mutual eye contact ranged from 0% to 45% of the conversation, in very brief instances. This was despite participants subjectively perceiving eye contact occurring for about 70% of the conversation. We argue that the subjective perception of eye contact is a product of mutual face gaze rather than actual mutual eye contact. We also outline the fast activity of gaze movements upon various locations both on and off the face during a typical face-to-face conversation.
Collapse
|
33
|
Foulsham T, Frost E, Sage L. Stable individual differences predict eye movements to the left, but not handedness or line bisection. Vision Res 2018; 144:38-46. [DOI: 10.1016/j.visres.2018.02.002] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2017] [Revised: 01/15/2018] [Accepted: 02/06/2018] [Indexed: 10/17/2022]
|
34
|
Bosten JM, Mollon JD, Peterzell DH, Webster MA. Individual differences as a window into the structure and function of the visual system. Vision Res 2017; 141:1-3. [DOI: 10.1016/j.visres.2017.11.003] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
|
35
|
Caldara R. Culture Reveals a Flexible System for Face Processing. CURRENT DIRECTIONS IN PSYCHOLOGICAL SCIENCE 2017. [DOI: 10.1177/0963721417710036] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Nonetheless, a fundamental question remains debated: Is face processing governed by universal perceptual processes? It has long been presumed that this is the case. However, over the past decade, our work at the Eye and Brain Mapping Laboratory has called into question this widely held assumption. We have investigated the eye movements of Western and Eastern observers across various face-processing tasks to determine the effect of culture on perceptual processing. Commonalities aside, we found that Westerners distribute local fixations across the eye and mouth regions, whereas Easterners preferentially deploy central, global fixations during face recognition. Moreover, during the recognition of facial expressions of emotion, Westerners fixate the mouth relatively more to discriminate across expressions, whereas Easterners favor the eye region. Both observations demonstrate that the face system relies on different strategies to perform a range of socially relevant face-processing tasks with comparable levels of efficiency. Overall, these cultural perceptual biases challenge the view that the processes dedicated to face processing are universal, favoring instead the existence of distinct, flexible strategies. The way humans perceive the world and process faces is determined by experience and environmental factors.
Collapse
Affiliation(s)
- Roberto Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg
| |
Collapse
|