1. Park SY, Niehorster DC, Huber L, Virányi Z. Examining holistic processing strategies in dogs and humans through gaze behavior. PLoS One 2025; 20:e0317455. [PMID: 39970140] [PMCID: PMC11838905] [DOI: 10.1371/journal.pone.0317455]
Abstract
Extensive studies have shown that humans process faces holistically, considering not only individual features but also the relationships among them. Knowing where humans and dogs fixate first and the longest when they view faces is highly informative, because the locations can be used to evaluate whether they use a holistic face processing strategy or not. However, the conclusions reported by previous eye-tracking studies appear inconclusive. To address this, we conducted an experiment with humans and dogs, employing experimental settings and analysis methods that can enable direct cross-species comparisons. Our findings reveal that humans, unlike dogs, preferentially fixated on the central region, surrounded by the inner facial features, for both human and dog faces. This pattern was consistent for initial and sustained fixations over seven seconds, indicating a clear tendency towards holistic processing. Although dogs did not show an initial preference for what to look at, their later fixations may suggest holistic processing when viewing faces of their own species. We discuss various potential factors influencing species differences in our results, as well as differences compared to the results of previous studies.
Affiliation(s)
- Soon Young Park
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Vienna, Austria
- University of Vienna, Vienna, Austria
- Medical University Vienna, Vienna, Austria
- Diederick C. Niehorster
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Department of Psychology, Lund University, Lund, Sweden
- Ludwig Huber
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Vienna, Austria
- University of Vienna, Vienna, Austria
- Medical University Vienna, Vienna, Austria
- Zsófia Virányi
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Vienna, Austria
- University of Vienna, Vienna, Austria
- Medical University Vienna, Vienna, Austria
2. Whitham W, Karstadt B, Anderson NC, Bischof WF, Schapiro SJ, Kingstone A, Coss R, Birmingham E, Yorzinski JL. Predator gaze captures both human and chimpanzee attention. PLoS One 2024; 19:e0311673. [PMID: 39570943] [PMCID: PMC11581262] [DOI: 10.1371/journal.pone.0311673]
Abstract
Primates can rapidly detect potential predators and modify their behavior based on the level of risk. The gaze direction of predators is one feature that primates can use to assess risk levels: recognition of a predator's direct stare indicates to prey that it has been detected and the level of risk is relatively high. Predation has likely shaped visual attention in primates to quickly assess the level of risk but we know little about the constellation of low-level (e.g., contrast, color) and higher-order (e.g., category membership, perceived threat) visual features that primates use to do so. We therefore presented human and chimpanzee (Pan troglodytes) participants with photographs of potential predators (lions) and prey (impala) while we recorded their overt attention with an eye-tracker. The gaze of the predators and prey was either directed or averted. We found that both humans and chimpanzees visually fixated the eyes of predators more than those of prey. In addition, they directed the most attention toward the eyes of directed (rather than averted) predators. Humans, but not chimpanzees, gazed at the eyes of the predators and prey more than other features. Importantly, low-level visual features of the predators and prey did not provide a good explanation of the observed gaze patterns.
Affiliation(s)
- Will Whitham
- Department of Ecology and Conservation Biology, Texas A&M University, College Station, Texas, United States of America
- Department of Comparative Medicine, UT MD Anderson Cancer Center, Bastrop, Texas, United States of America
- Bradley Karstadt
- Faculty of Education, Simon Fraser University, Burnaby, British Columbia, Canada
- Nicola C. Anderson
- Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
- Walter F. Bischof
- Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
- Steven J. Schapiro
- Department of Comparative Medicine, UT MD Anderson Cancer Center, Bastrop, Texas, United States of America
- Alan Kingstone
- Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
- Richard Coss
- Department of Psychology, University of California Davis, Davis, California, United States of America
- Elina Birmingham
- Faculty of Education, Simon Fraser University, Burnaby, British Columbia, Canada
- Jessica L. Yorzinski
- Department of Ecology and Conservation Biology, Texas A&M University, College Station, Texas, United States of America
3. Šoková B, Baránková M, Halamová J. Fixation patterns in pairs of facial expressions-preferences of self-critical individuals. PeerJ Comput Sci 2024; 10:e2413. [PMID: 39650388] [PMCID: PMC11623007] [DOI: 10.7717/peerj-cs.2413]
Abstract
So far, studies have revealed some differences in how long self-critical individuals fixate on specific facial expressions and difficulties in recognising these expressions. However, the research has also indicated a need to distinguish between the different forms of self-criticism (inadequate or hated self), the key underlying factor in psychopathology. Therefore, the aim of the current research was to explore fixation patterns for all seven primary emotions (happiness, sadness, fear, disgust, contempt, anger, and surprise) and the neutral face expression in relation to level of self-criticism by presenting random facial stimuli in the right or left visual field. Based on the previous studies, two groups were defined, and the pattern of fixations and eye movements were compared (high and low inadequate and hated self). The research sample consisted of 120 adult participants, 60 women and 60 men. We used the Forms of Self-Criticizing and Self-Reassuring Scale to measure self-criticism. As stimuli for the eye-tracking task, we used facial expressions from the Umeå University Database of Facial Expressions database. Eye movements were recorded using the Tobii X2 eye tracker. Results showed that in highly self-critical participants with inadequate self, time to first fixation and duration of first fixation was shorter. Respondents with higher inadequate self also exhibited a sustained pattern in fixations (total fixation duration; total fixation duration ratio and average fixation duration)-fixation time increased as self-criticism increased, indicating heightened attention to facial expressions. On the other hand, individuals with high hated self showed increased total fixation duration and fixation count for emotions presented in the right visual field but did not differ in initial fixation metrics in comparison with high inadequate self group. These results suggest that the two forms of self-criticism - inadequate self and hated self, may function as distinct mechanisms in relation to emotional processing, with implications for their role as potential transdiagnostic markers of psychopathology based on the fixation eye-tracking metrics.
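For readers unfamiliar with the fixation metrics named above (time to first fixation, first fixation duration, total fixation duration and its ratio, average fixation duration, fixation count), the following minimal Python sketch shows how such area-of-interest (AOI) metrics are typically derived from a fixation list. The column names and input format are illustrative assumptions, not the authors' actual Tobii processing pipeline.

```python
# Illustrative sketch: common AOI-based fixation metrics computed from a
# fixation table. Column names and input format are assumptions, not the
# authors' actual pipeline.
import pandas as pd

def aoi_metrics(fixations: pd.DataFrame, trial_onset: float) -> pd.DataFrame:
    """fixations: one row per fixation with columns
    'aoi' (e.g., 'left_face', 'right_face'), 'start' (s), 'duration' (s)."""
    rows = []
    for aoi, fx in fixations.groupby("aoi"):
        fx = fx.sort_values("start")
        rows.append({
            "aoi": aoi,
            "time_to_first_fixation": fx["start"].iloc[0] - trial_onset,
            "first_fixation_duration": fx["duration"].iloc[0],
            "total_fixation_duration": fx["duration"].sum(),
            "average_fixation_duration": fx["duration"].mean(),
            "fixation_count": len(fx),
        })
    out = pd.DataFrame(rows)
    # total fixation duration ratio: share of all fixation time spent in each AOI
    out["total_fixation_duration_ratio"] = (
        out["total_fixation_duration"] / out["total_fixation_duration"].sum()
    )
    return out
```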
Affiliation(s)
- Bronislava Šoková
- Institute of Applied Psychology, Faculty of Social and Economic Sciences, Comenius University, Bratislava, Slovakia
- Martina Baránková
- Institute of Applied Psychology, Faculty of Social and Economic Sciences, Comenius University, Bratislava, Slovakia
- Júlia Halamová
- Institute of Applied Psychology, Faculty of Social and Economic Sciences, Comenius University, Bratislava, Slovakia
4. Liu M, Zhan J, Wang L. Specified functions of the first two fixations in face recognition: Sampling the general-to-specific facial information. iScience 2024; 27:110686. [PMID: 39246447] [PMCID: PMC11378928] [DOI: 10.1016/j.isci.2024.110686]
Abstract
Visual perception is enacted and constrained by the constantly moving eyes. Although it is well known that the first two fixations are crucial for face recognition, the function of each fixation remains unspecified. Here we demonstrate a central-to-divergent pattern of the two fixations and specify their functions: Fix I clustered along the nose bridge to cover the broad facial information; Fix II diverged to eyes, nostrils, and lips to get the local information. Fix II correlated more than Fix I with the differentiating information between faces and contributed more to recognition responses. While face categories can be significantly discriminated by Fix II's but not Fix I's patterns alone, the combined patterns of the two yield better discrimination. Our results suggest a functional division and collaboration of the two fixations in sampling the general-to-specific facial information and add to understanding visual perception as an active process undertaken by structural motor programs.
Affiliation(s)
- Meng Liu
- Institute of Psychology and Behavioral Science, Antai College of Economics and Management, Shanghai Jiao Tong University, Shanghai 200030, China
- School of Psychology, Shanghai Jiao Tong University, Shanghai 200030, China
- Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai 200030, China
- Jiayu Zhan
- School of Psychological and Cognitive Sciences, Peking University, Beijing 100871, China
- Institute for Artificial Intelligence, Peking University, Beijing 100871, China
- State Key Laboratory of General Artificial Intelligence (BIGAI), Beijing 100871, China
- Lihui Wang
- Institute of Psychology and Behavioral Science, Antai College of Economics and Management, Shanghai Jiao Tong University, Shanghai 200030, China
- School of Psychology, Shanghai Jiao Tong University, Shanghai 200030, China
- Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai 200030, China
5. Butcher N, Bennetts RJ, Sexton L, Barbanta A, Lander K. Eye movement differences when recognising and learning moving and static faces. Q J Exp Psychol (Hove) 2024:17470218241252145. [PMID: 38644390] [DOI: 10.1177/17470218241252145]
Abstract
Seeing a face in motion can help subsequent face recognition. Several explanations have been proposed for this "motion advantage," but other factors that might play a role have received less attention. For example, facial movement might enhance recognition by attracting attention to the internal facial features, thereby facilitating identification. However, there is no direct evidence that motion increases attention to regions of the face that facilitate identification (i.e., internal features) compared with static faces. We tested this hypothesis by recording participants' eye movements while they completed the famous face recognition (Experiment 1, N = 32), and face-learning (Experiment 2, N = 60, Experiment 3, N = 68) tasks, with presentation style manipulated (moving or static). Across all three experiments, a motion advantage was found, and participants directed a higher proportion of fixations to the internal features (i.e., eyes, nose, and mouth) of moving faces versus static. Conversely, the proportion of fixations to the internal non-feature area (i.e., cheeks, forehead, chin) and external area (Experiment 3) was significantly reduced for moving compared with static faces (all ps < .05). Results suggest that during both familiar and unfamiliar face recognition, facial motion is associated with increased attention to internal facial features, but only during familiar face recognition is the magnitude of the motion advantage significantly related functionally to the proportion of fixations directed to the internal features.
Affiliation(s)
- Natalie Butcher
- Department of Psychology, Teesside University, Middlesbrough, UK
- Laura Sexton
- Department of Psychology, Teesside University, Middlesbrough, UK
- School of Psychology, Faculty of Health Sciences and Wellbeing, University of Sunderland, Sunderland, UK
- Karen Lander
- Division of Psychology, Communication and Human Neuroscience, University of Manchester, Manchester, UK
6. Xu K. Insights into the relationship between eye movements and personality traits in restricted visual fields. Sci Rep 2024; 14:10261. [PMID: 38704441] [PMCID: PMC11069522] [DOI: 10.1038/s41598-024-60992-w]
Abstract
Previous studies have suggested behavioral patterns, such as visual attention and eye movements, relate to individual personality traits. However, these studies mainly focused on free visual tasks, and the impact of visual field restriction remains inadequately understood. The primary objective of this study is to elucidate the patterns of conscious eye movements induced by visual field restriction and to examine how these patterns relate to individual personality traits. Building on previous research, we aim to gain new insights through two behavioral experiments, unraveling the intricate relationship between visual behaviors and individual personality traits. As a result, both Experiment 1 and Experiment 2 revealed differences in eye movements during free observation and visual field restriction. Particularly, simulation results based on the analyzed data showed clear distinctions in eye movements between free observation and visual field restriction conditions. This suggests that eye movements during free observation involve a mixture of conscious and unconscious eye movements. Furthermore, we observed significant correlations between conscious eye movements and personality traits, with more pronounced effects in the visual field restriction condition used in Experiment 2 compared to Experiment 1. These analytical findings provide a novel perspective on human cognitive processes through visual perception.
Affiliation(s)
- Kuangzhe Xu
- Institute for Promotion of Higher Education, Hirosaki University, Aomori, 036-8560, Japan.
7. Fuchs M, Kersting A, Suslow T, Bodenschatz CM. Recognizing and Looking at Masked Emotional Faces in Alexithymia. Behav Sci (Basel) 2024; 14:343. [PMID: 38667139] [PMCID: PMC11047507] [DOI: 10.3390/bs14040343]
Abstract
Alexithymia is a clinically relevant personality construct characterized by difficulties identifying and communicating one's emotions and externally oriented thinking. Alexithymia has been found to be related to poor emotion decoding and diminished attention to the eyes. The present eye tracking study investigated whether high levels of alexithymia are related to impairments in recognizing emotions in masked faces and reduced attentional preference for the eyes. An emotion recognition task with happy, fearful, disgusted, and neutral faces with face masks was administered to high-alexithymic and non-alexithymic individuals. Hit rates, latencies of correct responses, and fixation duration on eyes and face mask were analyzed as a function of group and sex. Alexithymia had no effects on accuracy and speed of emotion recognition. However, alexithymic men showed less attentional preference for the eyes relative to the mask than non-alexithymic men, which was due to their increased attention to face masks. No fixation duration differences were observed between alexithymic and non-alexithymic women. Our data indicate that high levels of alexithymia might not have adverse effects on the efficiency of emotion recognition from faces wearing masks. Future research on gaze behavior during facial emotion recognition in high alexithymia should consider sex as a moderating variable.
Affiliation(s)
- Thomas Suslow
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig Medical Center, 04103 Leipzig, Germany; (M.F.); (A.K.); (C.M.B.)
8. Falon SL, Jobson L, Liddell BJ. Does culture moderate the encoding and recognition of negative cues? Evidence from an eye-tracking study. PLoS One 2024; 19:e0295301. [PMID: 38630733] [PMCID: PMC11023573] [DOI: 10.1371/journal.pone.0295301]
Abstract
Cross-cultural research has elucidated many important differences between people from Western European and East Asian cultural backgrounds regarding how each group encodes and consolidates the contents of complex visual stimuli. While Western European groups typically demonstrate a perceptual bias towards centralised information, East Asian groups favour a perceptual bias towards background information. However, this research has largely focused on the perception of neutral cues and thus questions remain regarding cultural group differences in both the perception and recognition of negative, emotionally significant cues. The present study therefore compared Western European (n = 42) and East Asian (n = 40) participants on a free-viewing task and a subsequent memory task utilising negative and neutral social cues. Attentional deployment to the centralised versus background components of negative and neutral social cues was indexed via eye-tracking, and memory was assessed with a cued-recognition task two days later. While both groups demonstrated an attentional bias towards the centralised components of the neutral cues, only the Western European group demonstrated this bias in the case of the negative cues. There were no significant differences observed between Western European and East Asian groups in terms of memory accuracy, although the Western European group was unexpectedly less sensitive to the centralised components of the negative cues. These findings suggest that culture modulates low-level attentional deployment to negative information, however not higher-level recognition after a temporal interval. This paper is, to our knowledge, the first to concurrently consider the effect of culture on both attentional outcomes and memory for both negative and neutral cues.
Affiliation(s)
- Laura Jobson
- School of Psychological Sciences, Monash University, Clayton, Australia
9. Yamada Y, Shinkawa K, Kobayashi M, Nemoto M, Ota M, Nemoto K, Arai T. Distinct eye movement patterns to complex scenes in Alzheimer's disease and Lewy body disease. Front Neurosci 2024; 18:1333894. [PMID: 38646608] [PMCID: PMC11026598] [DOI: 10.3389/fnins.2024.1333894]
Abstract
Background: Alzheimer's disease (AD) and Lewy body disease (LBD), the two most common causes of neurodegenerative dementia with similar clinical manifestations, both show impaired visual attention and altered eye movements. However, prior studies have used structured tasks or restricted stimuli, limiting the insights into how eye movements alter and differ between AD and LBD in daily life.
Objective: We aimed to comprehensively characterize eye movements of AD and LBD patients on naturalistic complex scenes with broad categories of objects, which would provide a context closer to real-world free viewing, and to identify disease-specific patterns of altered eye movements.
Methods: We collected spontaneous viewing behaviors to 200 naturalistic complex scenes from patients with AD or LBD at the prodromal or dementia stage, as well as matched control participants. We then investigated eye movement patterns using a computational visual attention model with high-level image features of object properties and semantic information.
Results: Compared with matched controls, we identified two disease-specific altered patterns of eye movements: diminished visual exploration, which differentially correlates with cognitive impairment in AD and with motor impairment in LBD; and reduced gaze allocation to objects, attributed to a weaker attention bias toward high-level image features in AD and to a greater image-center bias in LBD.
Conclusion: Our findings may help differentiate AD and LBD patients and comprehend their real-world visual behaviors to mitigate the widespread impact of impaired visual attention on daily activities.
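As a rough illustration of how fixations can be scored against a model-derived attention map, the sketch below computes normalized scanpath saliency (NSS), a standard agreement measure between fixation locations and a saliency or feature map. This is a generic metric chosen for illustration; it is not the specific computational visual attention model used by the authors.

```python
# Illustrative sketch: normalized scanpath saliency (NSS), a standard score for
# how well a model attention map predicts observed fixations. Generic metric,
# not the paper's actual model.
import numpy as np

def nss(saliency_map: np.ndarray, fix_x: np.ndarray, fix_y: np.ndarray) -> float:
    """saliency_map: 2-D array (height x width); fix_x, fix_y: fixation pixel coordinates."""
    z = (saliency_map - saliency_map.mean()) / saliency_map.std()
    cols = np.clip(np.round(fix_x).astype(int), 0, z.shape[1] - 1)
    rows = np.clip(np.round(fix_y).astype(int), 0, z.shape[0] - 1)
    # Higher values mean fixations fall on regions the map rates as more salient
    return float(z[rows, cols].mean())
```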
Affiliation(s)
- Yasunori Yamada
- Digital Health, IBM Research, Tokyo, Japan
- Department of Psychiatry, Division of Clinical Medicine, Institute of Medicine, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Masatomo Kobayashi
- Digital Health, IBM Research, Tokyo, Japan
- Department of Psychiatry, Division of Clinical Medicine, Institute of Medicine, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Miyuki Nemoto
- Department of Psychiatry, Division of Clinical Medicine, Institute of Medicine, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Miho Ota
- Department of Psychiatry, Division of Clinical Medicine, Institute of Medicine, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Kiyotaka Nemoto
- Department of Psychiatry, Division of Clinical Medicine, Institute of Medicine, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Tetsuaki Arai
- Department of Psychiatry, Division of Clinical Medicine, Institute of Medicine, University of Tsukuba, Tsukuba, Ibaraki, Japan
10. Wang Z, Meghanathan RN, Pollmann S, Wang L. Common structure of saccades and microsaccades in visual perception. J Vis 2024; 24:20. [PMID: 38656530] [PMCID: PMC11044844] [DOI: 10.1167/jov.24.4.20]
Abstract
We obtain large amounts of external information through our eyes, a process often considered analogous to picture mapping onto a camera lens. However, our eyes are never as still as a camera lens, with saccades occurring between fixations and microsaccades occurring within a fixation. Although saccades are agreed to be functional for information sampling in visual perception, it remains unknown if microsaccades have a similar function when eye movement is restricted. Here, we demonstrated that saccades and microsaccades share common spatiotemporal structures in viewing visual objects. Twenty-seven adults viewed faces and houses in free-viewing and fixation-controlled conditions. Both saccades and microsaccades showed distinctive spatiotemporal patterns between face and house viewing that could be discriminated by pattern classifications. The classifications based on saccades and microsaccades could also be mutually generalized. Importantly, individuals who showed more distinctive saccadic patterns between faces and houses also showed more distinctive microsaccadic patterns. Moreover, saccades and microsaccades showed a higher structure similarity for face viewing than house viewing and a common orienting preference for the eye region over the mouth region. These findings suggested a common oculomotor program that is used to optimize information sampling during visual object perception.
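The pattern classification described above can be approximated with a simple decoding pipeline: summarize each trial's saccade or microsaccade landing points as a spatial histogram, train a classifier to discriminate face from house viewing, and test whether a classifier trained on saccades generalizes to microsaccades. The sketch below illustrates that idea under assumed feature and classifier choices; it is not the authors' analysis code.

```python
# Illustrative sketch: decoding viewing category (face vs. house) from the
# spatial pattern of (micro)saccade landing points, plus cross-generalization
# from saccades to microsaccades. Feature and classifier choices are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def landing_histogram(x, y, bins=8, extent=(-5, 5)):
    """Flattened 2-D histogram of landing coordinates (deg) as a feature vector."""
    h, _, _ = np.histogram2d(x, y, bins=bins, range=[extent, extent], density=True)
    return h.ravel()

def within_type_accuracy(features, labels):
    """Cross-validated decoding accuracy within one movement type."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, features, labels, cv=5).mean()

def cross_generalization(sacc_feats, sacc_labels, micro_feats, micro_labels):
    """Train on saccade patterns, test on microsaccade patterns."""
    clf = LogisticRegression(max_iter=1000).fit(sacc_feats, sacc_labels)
    return clf.score(micro_feats, micro_labels)
```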
Affiliation(s)
- Zhenni Wang
- Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Institute of Psychology and Behavioral Science, Shanghai Jiao Tong University, Shanghai, China
- Stefan Pollmann
- Department of Experimental Psychology, Otto-von-Guericke University, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Magdeburg, Germany
- Lihui Wang
- Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Institute of Psychology and Behavioral Science, Shanghai Jiao Tong University, Shanghai, China
11. Suslow T, Hoepfel D, Kersting A, Bodenschatz CM. Depressive symptoms and visual attention to others' eyes in healthy individuals. BMC Psychiatry 2024; 24:184. [PMID: 38448877] [PMCID: PMC10916197] [DOI: 10.1186/s12888-024-05633-2]
Abstract
Background: Eye contact is a fundamental part of social interaction. In clinical studies, it has been observed that patients suffering from depression make less eye contact during interviews than healthy individuals, which could be a factor contributing to their social functioning impairments. Similarly, results from mood induction studies with healthy persons indicate that attention to the eyes diminishes as a function of sad mood. The present screen-based eye-tracking study examined whether depressive symptoms in healthy individuals are associated with reduced visual attention to other persons' direct gaze during free viewing.
Methods: Gaze behavior of 44 individuals with depressive symptoms and 49 individuals with no depressive symptoms was analyzed in a free viewing task. Grouping was based on the Beck Depression Inventory using the cut-off proposed by Hautzinger et al. (2006). Participants saw pairs of faces with direct gaze showing emotional or neutral expressions. One-half of the face pairs was shown without face masks, whereas the other half was presented with face masks. Participants' dwell times and first fixation durations were analyzed.
Results: In case of unmasked facial expressions, participants with depressive symptoms looked shorter at the eyes compared to individuals without symptoms across all expression conditions. No group difference in first fixation duration on the eyes of masked and unmasked faces was observed. Individuals with depressive symptoms dwelled longer on the mouth region of unmasked faces. For masked faces, no significant group differences in dwell time on the eyes were found. Moreover, when specifically examining dwell time on the eyes of faces with an emotional expression there were also no significant differences between groups. Overall, participants gazed significantly longer at the eyes in masked compared to unmasked faces.
Conclusions: For faces without mask, our results suggest that depressiveness in healthy individuals goes along with less visual attention to other persons' eyes but not with less visual attention to others' faces. When factors come into play that generally amplify the attention directed to the eyes, such as face masks or emotions, then no relationship between depressiveness and visual attention to the eyes can be established.
Affiliation(s)
- Thomas Suslow
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig Medical Center, Semmelweisstr. 10, 04103, Leipzig, Germany.
- Dennis Hoepfel
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig Medical Center, Semmelweisstr. 10, 04103, Leipzig, Germany
- Anette Kersting
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig Medical Center, Semmelweisstr. 10, 04103, Leipzig, Germany
- Charlott Maria Bodenschatz
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig Medical Center, Semmelweisstr. 10, 04103, Leipzig, Germany
12. Stosic MD, Helwig S, Ruben MA. More Than Meets the Eyes: Bringing Attention to the Eyes Increases First Impressions of Warmth and Competence. Pers Soc Psychol Bull 2024; 50:253-269. [PMID: 36259443] [DOI: 10.1177/01461672221128114]
Abstract
The present research examined how face masks alter first impressions of warmth and competence for different racial groups. Participants were randomly assigned to view photographs of White, Black, and Asian targets with or without masks. Across four separate studies (total N = 1,012), masked targets were rated significantly higher in warmth and competence compared with unmasked targets, regardless of their race. However, Asian targets benefited the least from being seen masked compared with Black or White targets. Studies 3 and 4 demonstrate how the positive effect of masks is likely due to these clothing garments re-directing attention toward the eyes of the wearer. Participants viewing faces cropped to the eyes (Study 3), or instructed to gaze into the eyes of faces (Study 4), rated these targets similarly to masked targets, and higher than unmasked targets. Neither political affiliation, belief in mask effectiveness, nor explicit racial prejudice moderated any hypothesized effects.
Affiliation(s)
- Shelby Helwig
- The University of Maine, Orono, ME, USA
- Husson University, Bangor, ME, USA
- Mollie A Ruben
- The University of Maine, Orono, ME, USA
- The University of Rhode Island, Kingston, RI, USA
13. Bertucci V, Huang C. Neuromodulator Assessment and Treatment for the Upper Face: An Update. Dermatol Clin 2024; 42:51-62. [PMID: 37977684] [DOI: 10.1016/j.det.2023.08.012]
Abstract
Neuromodulator treatment of the upper face has been extensively studied and serves as an excellent tool to enhance facial appearance, non-verbal communication, and social functioning. Optimal outcomes are best achieved when health care providers take an individualized approach, based on knowledge of structural and functional anatomy, thorough facial assessment, and customized injection techniques and patterns.
Affiliation(s)
- Vince Bertucci
- Department of Medicine, University of Toronto, Toronto, Ontario, Canada; Private Practice, 100-8333 Weston Road, Woodbridge, Ontario L4L 8E2, Canada.
- Christina Huang
- Department of Medicine, University of Toronto, Toronto, Ontario, Canada
14. Parimoo S, Choi A, Iafrate L, Grady C, Olsen R. Are older adults susceptible to visual distraction when targets and distractors are spatially separated? Neuropsychol Dev Cogn B Aging Neuropsychol Cogn 2024; 31:38-74. [PMID: 36059213] [DOI: 10.1080/13825585.2022.2117271]
Abstract
Older adults show preserved memory for previously distracting information due to reduced inhibitory control. In some previous studies, targets and distractors overlap both temporally and spatially. We investigated whether age differences in attentional orienting and disengagement affect recognition memory when targets and distractors are spatially separated at encoding. In Experiments 1 and 2, eye movements were recorded while participants completed an incidental encoding task under covert (i.e., restricted viewing) and overt (i.e., free-viewing) conditions, respectively. The encoding task consisted of pairs of target and distractor item-color stimuli presented in separate visual hemifields. Prior to stimulus onset, a central cue indicated the location of the upcoming target. Participants were subsequently tested on their recognition of the items, their location, and the associated color. In Experiment 3, targets were validly cued on 75% of the encoding trials; on invalid trials, participants had to disengage their attention from the distractor and reorient to the target. Associative memory for colors was reduced among older adults across all experiments, though their location memory was only reduced in Experiment 1. In Experiment 2, older and younger adults directed a similar proportion of fixations toward targets and distractors. Explicit recognition of distractors did not differ between age groups in any of the experiments. However, older adults were slower to correctly recognize distractors than false alarm to novel items in Experiment 2, suggesting some implicit memory for distraction. Together, these results demonstrate that older adults may only be vulnerable to encoding visual distraction when viewing behavior is unconstrained.
Affiliation(s)
- Shireen Parimoo
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Rotman Research Institute, Toronto, ON, Canada
- Anika Choi
- Rotman Research Institute, Toronto, ON, Canada
- Cheryl Grady
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Rotman Research Institute, Toronto, ON, Canada
- Rosanna Olsen
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Rotman Research Institute, Toronto, ON, Canada
15. Leong BQZ, Estudillo AJ, Hussain Ismail AM. Holistic and featural processing's link to face recognition varies by individual and task. Sci Rep 2023; 13:16869. [PMID: 37803085] [PMCID: PMC10558561] [DOI: 10.1038/s41598-023-44164-w]
Abstract
While it is generally accepted that holistic processing facilitates face recognition, recent studies suggest that poor recognition might also arise from imprecise perception of local features in the face. This study aimed to examine to what extent holistic and featural processing relates to individual differences in face recognition ability (FRA), during face learning (Experiment 1) and face recognition (Experiment 2). Participants performed two tasks: (1) The "Cambridge Face Memory Test-Chinese" which measured participants' FRAs, and (2) an "old/new recognition memory test" encompassing whole faces (preserving holistic and featural processing) and faces revealed through a dynamic aperture (impairing holistic processing but preserving featural processing). Our results showed that participants recognised faces more accurately in conditions when holistic information was preserved, than when it is impaired. We also show that the better use of holistic processing during face learning and face recognition was associated with better FRAs. However, enhanced featural processing during recognition, but not during learning, was related to better FRAs. Together, our findings demonstrate that good face recognition depends on distinct roles played by holistic and featural processing at different stages of face recognition.
Affiliation(s)
- Bryan Qi Zheng Leong
- School of Psychology, University of Nottingham Malaysia, Semenyih, Malaysia.
- Department of Psychology, Bournemouth University, Poole House Talbot Campus, Poole, BH12 5BB, UK.
- Alejandro J Estudillo
- School of Psychology, University of Nottingham Malaysia, Semenyih, Malaysia.
- Department of Psychology, Bournemouth University, Poole House Talbot Campus, Poole, BH12 5BB, UK.
16. Korda Ž, Walcher S, Körner C, Benedek M. Effects of internally directed cognition on smooth pursuit eye movements: A systematic examination of perceptual decoupling. Atten Percept Psychophys 2023; 85:1159-1178. [PMID: 36922477] [PMCID: PMC10167146] [DOI: 10.3758/s13414-023-02688-3]
Abstract
Eye behavior differs between internally and externally directed cognition and thus is indicative of an internal versus external attention focus. Recent work implicated perceptual decoupling (i.e., eye behavior becoming less determined by the sensory environment) as one of the key mechanisms involved in these attention-related eye movement differences. However, it is not yet understood how perceptual decoupling depends on the characteristics of the internal task. Therefore, we systematically examined effects of varying internal task demands on smooth pursuit eye movements. Specifically, we evaluated effects of the internal workload (control vs. low vs. high) and of internal task (arithmetic vs. visuospatial). The results of multilevel modelling showed that effects of perceptual decoupling were stronger for higher workload, and more pronounced for the visuospatial modality. Effects also followed a characteristic time-course relative to internal operations. The findings provide further support of the perceptual decoupling mechanism by showing that it is sensitive to the degree of interference between external and internal information.
Affiliation(s)
- Živa Korda
- Department of Psychology, University of Graz, Graz, Austria.
- Sonja Walcher
- Department of Psychology, University of Graz, Graz, Austria
17. Or CCF, Ng KYJ, Chia Y, Koh JH, Lim DY, Lee ALF. Face masks are less effective than sunglasses in masking face identity. Sci Rep 2023; 13:4284. [PMID: 36922579] [PMCID: PMC10015138] [DOI: 10.1038/s41598-023-31321-4]
Abstract
The effect of covering faces on face identification is recently garnering interest amid the COVID-19 pandemic. Here, we investigated how face identification performance was affected by two types of face disguise: sunglasses and face masks. Observers studied a series of faces; then judged whether a series of test faces, comprising studied and novel faces, had been studied before or not. Face stimuli were presented either without coverings (full faces), wearing sunglasses covering the upper region (eyes, eyebrows), or wearing surgical masks covering the lower region (nose, mouth, chin). We found that sunglasses led to larger reductions in sensitivity (d') to face identity than face masks did, while both disguises increased the tendency to report faces as studied before, a bias that was absent for full faces. In addition, faces disguised during either study or test only (i.e. study disguised faces, test with full faces; and vice versa) led to further reductions in sensitivity from both studying and testing with disguised faces, suggesting that congruence between study and test is crucial for memory retrieval. These findings implied that the upper region of the face, including the eye-region features, is more diagnostic for holistic face-identity processing than the lower face region.
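The sensitivity (d') and bias effects reported above come from standard signal detection analysis of old/new recognition responses. The sketch below shows the generic computation, with a log-linear correction for extreme hit or false-alarm rates; the example counts are hypothetical and not taken from the study.

```python
# Illustrative sketch: sensitivity (d') and response bias (criterion c) for an
# old/new recognition task. Generic signal-detection computation, not the
# authors' code; example counts are hypothetical.
from scipy.stats import norm

def dprime_and_bias(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction avoids infinite z-scores when a rate is 0 or 1
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)  # negative values indicate a bias to respond "studied before"
    return d_prime, criterion

# Hypothetical example: 40 studied and 40 novel test faces
print(dprime_and_bias(hits=30, misses=10, false_alarms=12, correct_rejections=28))
```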
Affiliation(s)
- Charles C-F Or
- Division of Psychology, School of Social Sciences, Nanyang Technological University, 48 Nanyang Avenue, Singapore, 639818, Singapore.
- Kester Y J Ng
- Division of Psychology, School of Social Sciences, Nanyang Technological University, 48 Nanyang Avenue, Singapore, 639818, Singapore
- Yiik Chia
- Division of Psychology, School of Social Sciences, Nanyang Technological University, 48 Nanyang Avenue, Singapore, 639818, Singapore
- Jing Han Koh
- Division of Psychology, School of Social Sciences, Nanyang Technological University, 48 Nanyang Avenue, Singapore, 639818, Singapore
- Denise Y Lim
- Division of Psychology, School of Social Sciences, Nanyang Technological University, 48 Nanyang Avenue, Singapore, 639818, Singapore
- Alan L F Lee
- Department of Psychology, Lingnan University, Tuen Mun, Hong Kong
18. Sun J, Dong T, Liu P. Holistic processing and visual characteristics of regulated and spontaneous expressions. J Vis 2023; 23:6. [PMID: 36912592] [PMCID: PMC10019490] [DOI: 10.1167/jov.23.3.6]
Abstract
The rapid and efficient recognition of facial expressions is crucial for adaptive behaviors, and holistic processing is one of the critical processing methods to achieve this adaptation. Therefore, this study integrated the effects and attentional characteristics of the authenticity of facial expressions on holistic processing. The results show that both regulated and spontaneous expressions were processed holistically. However, the spontaneous expression details did not indicate typical holistic processing, with the congruency effect observed equally for aligned and misaligned conditions. No significant difference between the two expressions was observed in terms of reaction times and eye movement characteristics (i.e., total fixation duration, fixation counts, and first fixation duration). These findings suggest that holistic processing strategies differ between the two expressions. Nevertheless, the difference was not reflected in attentional engagement.
Affiliation(s)
- Juncai Sun
- School of Psychology, Qufu Normal University, Qufu, China
- Tiantian Dong
- Department of Psychology, Shanghai Normal University, Shanghai, China
- Ping Liu
- Department of Psychology, Shaoxing University, Shaoxing, China
19. Gautier J, El Haj M. Eyes don't lie: Eye movements differ during covert and overt autobiographical recall. Cognition 2023; 235:105416. [PMID: 36821995] [DOI: 10.1016/j.cognition.2023.105416]
Abstract
In everyday life, autobiographical memories are revisited silently (i.e., covert recall) or shared with others (i.e., overt recall), yet most research regarding eye movements and autobiographical recall has focused on overt recall. With that in mind, the aim of the current study was to evaluate eye movements during the retrieval of autobiographical memories (with a focus on emotion), recollected during covert and overt recall. Forty-three participants recalled personal memories out loud and silently, while wearing eye-tracking glasses, and rated these memories in terms of mental imagery and emotional intensity. Analyses showed fewer and longer fixations, fewer and shorter saccades, and fewer blinks during covert recall compared with overt recall. Participants perceived more mental images and had a more intense emotional experience during covert recall. These results are discussed considering cognitive load theories and the various functions of autobiographical recall. We theorize that fewer and longer fixations during covert recall may be due to more intense mental imagery. This study enriches the field of research on eye movements and autobiographical memory by addressing how we retrieve memories silently, a common activity of everyday life. More broadly, our results contribute to building objective tools to measure autobiographical memory, alongside already existing subjective scales.
Affiliation(s)
- Joanna Gautier
- Nantes Université, Univ Angers, Laboratoire de Psychologie des Pays de la Loire (LPPL - EA 4638), Chemin de la Censive du Tertre, F44000 Nantes, France.
- Mohamad El Haj
- Nantes Université, Univ Angers, Laboratoire de Psychologie des Pays de la Loire (LPPL - EA 4638), Chemin de la Censive du Tertre, F44000 Nantes, France; CHU Nantes, Clinical Gerontology Department, Bd Jacques Monod, F44300, Nantes, France; Institut Universitaire de France, Paris, France
20. Construction of Facial Composites from Eyewitness Memory. Adv Exp Med Biol 2023; 1392:149-190. [DOI: 10.1007/978-3-031-13021-2_8]
21. Li K, Lu A, Deng R, Yi H. The Unique Cost of Human Eye Gaze in Cognitive Control: Being Human-Specific and Body-Related? Psichologija 2022. [DOI: 10.15388/psichol.2022.59]
Abstract
This study investigated the eye gaze cost in cognitive control and whether it is human-specific and body-related. In Experiment 1, we explored whether there was a cost of human eye gaze in cognitive control and extended it by focusing on the role of emotion in the cost. The Stroop effect was found to be larger in the eye-gaze condition than in the vertical grating condition, and to be comparable across positive, negative, and neutral trials. In Experiment 2, we explored whether the eye gaze cost in cognitive control was limited to human eyes. No larger Stroop effect was found in the feline eye-gaze condition, nor was a modulating role of emotion observed. In Experiment 3, we explored whether the mouth could elicit a cost in the Stroop effect. The Stroop effect was not significantly larger in the mouth condition than in the vertical grating condition, nor across positive, negative, and neutral conditions. The results suggest that: (1) there is a robust cost of eye gaze in cognitive control; (2) this eye-gaze cost is specific to human eyes and does not extend to animal eyes; (3) the cost arises for human eyes but not for the human mouth. This study supported the notion that presentation of social cues, such as human eyes, could influence attentional processing, and provided preliminary evidence that the human eye plays an important role in cognitive processing.
22. Akselevich V, Gilaie-Dotan S. Positive and negative facial valence perception are modulated differently by eccentricity in the parafovea. Sci Rep 2022; 12:21693. [PMID: 36522350] [PMCID: PMC9755278] [DOI: 10.1038/s41598-022-24919-7]
Abstract
Understanding whether people around us are in a good, bad or neutral mood can be critical to our behavior, both when we are looking directly at them and when they are in our peripheral visual field. However, facial expressions of emotions are often investigated at central visual field or at locations right or left of fixation. Here we assumed that perception of facial emotional valence (the emotion's pleasantness) changes with distance from central visual field (eccentricity) and that different emotions may be influenced differently by eccentricity. Participants (n = 58) judged the valence of emotional faces across the parafovea (≤ 4°; positive (happy), negative (fearful), or neutral) while their eyes were being tracked. As expected, performance decreased with eccentricity. Positive valence perception was least affected by eccentricity (accuracy reduction of 10-19% at 4°) and negative the most (accuracy reduction of 35-38% at 4°), and this was not a result of speed-accuracy trade-off or response biases. Within-valence (but not across-valence) performance was associated across eccentricities, suggesting perception of different valences is supported by different mechanisms. While our results may not generalize to all positive and negative emotions, they indicate that beyond-foveal investigations can reveal additional characteristics of the mechanisms that underlie facial expression processing and perception.
Affiliation(s)
- Vasilisa Akselevich
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, 5290002 Ramat Gan, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Sharon Gilaie-Dotan
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, 5290002 Ramat Gan, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- UCL Institute of Cognitive Neuroscience, London, UK
23. Age Effects in Emotional Memory and Associated Eye Movements. Brain Sci 2022; 12:1719. [PMID: 36552178] [PMCID: PMC9776083] [DOI: 10.3390/brainsci12121719]
Abstract
Mnemonic enhanced memory has been observed for negative events. Here, we investigate its association with spatiotemporal attention, consolidation, and age. An ingenious method to study visual attention for emotional stimuli is eye tracking. Twenty young adults and twenty-one older adults encoded stimuli depicting neutral faces, angry faces, and houses while eye movements were recorded. The encoding phase was followed by an immediate and delayed (48 h) recognition assessment. Linear mixed model analyses of recognition performance with group, emotion, and their interaction as fixed effects revealed increased performance for angry compared to neutral faces in the young adults group only. Furthermore, young adults showed enhanced memory for angry faces compared to older adults. This effect was associated with a shorter fixation duration for angry faces compared to neutral faces in the older adults group. Furthermore, the results revealed that total fixation duration was a strong predictor for face memory performance.
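A minimal sketch of the kind of linear mixed model analysis described above, with group, emotion, and their interaction as fixed effects and a random intercept per participant, is given below using statsmodels. The data-frame column names are assumptions for illustration, not the authors' actual variable names.

```python
# Illustrative sketch: linear mixed model of recognition performance with
# group, emotion, and their interaction as fixed effects and a random
# intercept per participant. Column names are assumed for illustration.
import pandas as pd
import statsmodels.formula.api as smf

def fit_recognition_lmm(df: pd.DataFrame):
    """df: long-format data with columns 'accuracy', 'group' (young/older),
    'emotion' (angry/neutral/house), and 'subject' (participant ID)."""
    model = smf.mixedlm("accuracy ~ group * emotion", data=df, groups=df["subject"])
    result = model.fit()
    print(result.summary())  # inspect the group x emotion interaction term
    return result
```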
24. Effect of perceived eye gaze on the N170 component – A systematic review. Neurosci Biobehav Rev 2022; 143:104913. [DOI: 10.1016/j.neubiorev.2022.104913]
25. Broers N, Bainbridge WA, Michel R, Balestrieri E, Busch NA. The extent and specificity of visual exploration determines the formation of recollected memories in complex scenes. J Vis 2022; 22:9. [PMID: 36227616] [PMCID: PMC9583750] [DOI: 10.1167/jov.22.11.9]
Abstract
Our visual memories of complex scenes often appear as robust, detailed records of the past. Several studies have demonstrated that active exploration with eye movements improves recognition memory for scenes, but it is unclear whether this improvement is due to stronger feelings of familiarity or more detailed recollection. We related the extent and specificity of fixation patterns at encoding and retrieval to different recognition decisions in an incidental memory paradigm. After incidental encoding of 240 real-world scene photographs, participants (N = 44) answered a surprise memory test by reporting whether an image was new, remembered (indicating recollection), or just known to be old (indicating familiarity). To assess the specificity of their visual memories, we devised a novel report procedure in which participants selected the scene region that they specifically recollected, that appeared most familiar, or that was particularly new to them. At encoding, when considering the entire scene, subsequently recollected compared to familiar or forgotten scenes showed a larger number of fixations that were more broadly distributed, suggesting that more extensive visual exploration determines stronger and more detailed memories. However, when considering only the memory-relevant image areas, fixations were more dense and more clustered for subsequently recollected compared to subsequently familiar scenes. At retrieval, the extent of visual exploration was more restricted for recollected compared to new or forgotten scenes, with a smaller number of fixations. Importantly, fixation density and clustering were greater in memory-relevant areas for recollected versus familiar or falsely recognized images. Our findings suggest that more extensive visual exploration across the entire scene, with a subset of more focal and dense fixations in specific image areas, leads to increased potential for recollecting specific image aspects.
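As a rough illustration of how the extent and focality of visual exploration can be quantified, the sketch below computes fixation count, spatial dispersion around the scanpath centroid, and the proportion of fixations falling inside a reported image region. These simple descriptors are stand-ins chosen for illustration and are not the authors' exact operationalization of fixation density and clustering.

```python
# Illustrative sketch: simple descriptors of how extensive vs. focal a scanpath
# is. Measures and names are assumptions, not the authors' exact analysis.
import numpy as np

def scanpath_descriptors(x, y, region=None):
    """x, y: fixation coordinates in pixels; region: (x0, y0, x1, y1) or None."""
    pts = np.column_stack([x, y]).astype(float)
    centroid = pts.mean(axis=0)
    # Mean distance of fixations from the scanpath centroid (spatial dispersion)
    dispersion = np.linalg.norm(pts - centroid, axis=1).mean()
    out = {"n_fixations": len(pts), "dispersion_px": dispersion}
    if region is not None:
        x0, y0, x1, y1 = region
        inside = (pts[:, 0] >= x0) & (pts[:, 0] <= x1) & (pts[:, 1] >= y0) & (pts[:, 1] <= y1)
        out["prop_in_region"] = inside.mean()  # share of fixations in the reported region
    return out
```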
Affiliation(s)
- Nico Broers
- Institute of Psychology, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- René Michel
- Institute of Psychology, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Elio Balestrieri
- Institute of Psychology, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Niko A Busch
- Institute of Psychology, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
26. The role of discriminability in face perception: Interference processing of expression, gender, and gaze. Atten Percept Psychophys 2022; 84:2281-2292. [PMID: 36076120] [DOI: 10.3758/s13414-022-02561-9]
Abstract
Eye gaze plays a fundamental role in social interaction and facial recognition. However, interference processing between gaze and other facial variants (e.g., expression) and invariant information (e.g., gender) remains controversial and unclear, especially the role of facial information discriminability in interference. A Garner paradigm was used to conduct two experiments. This paradigm allows simultaneous investigation of the mutual influence of two kinds of facial information in one experiment. In Experiment 1, we manipulated facial expression discriminability and investigated its role in interference processing of gaze and facial expression. The results show that individuals were unable to ignore expression when classifying gaze with both high and low discriminability but could ignore gaze when classifying expression with high discriminability only. In Experiment 2, we manipulated gender discriminability and investigated its function in interference processing of gaze and gender. Participants were unable to ignore gender when classifying gaze with both high and low discriminability but could ignore gaze when classifying gender with low discriminability only. The results indicate that gaze categorization is affected by facial expression and gender regardless of facial information discriminability, whereas interference of gaze on facial expression and gender depends on the degree of discriminability. The present study provides evidence that the processing of gaze and other variant and invariant information is interdependent.
Collapse
|
27
|
Kawakami K, Vingilis-Jaremko L, Friesen JP, Meyers C, Fang X. Impact of similarity on recognition of faces of Black and White targets. Br J Psychol 2022; 113:1079-1099. [PMID: 35957498 DOI: 10.1111/bjop.12589] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2021] [Revised: 04/18/2022] [Accepted: 07/27/2022] [Indexed: 11/28/2022]
Abstract
One reason for the persistence of racial inequality may be anticipated dissimilarity with racial outgroups. In the present research, we explored the impact of perceived similarity with White and Black targets on facial identity recognition accuracy. In two studies, participants first completed an ostensible personality survey. Next, in a Learning Phase, Black and White faces were presented on one of three background colours. Participants were led to believe that these colours indicated similarities between them and the target person in the image. Specifically, they were informed that the background colours were associated with the extent to which responses by the target person on the personality survey and their own responses overlapped. In fact, faces were randomly assigned to background colours. In both studies, non-Black participants (Experiment 1) and White participants (Experiment 2) showed better recognition of White than Black faces. More importantly in the present context, a positive linear effect of similarity was found in both studies, with better recognition of increasingly similar Black and White targets. The independent effects for race of target and similarity, with no interaction, indicated that participants responded to Black and White faces according to category membership as well as on an interpersonal level related to similarity with specific targets. Together these findings suggest that while perceived similarity may enhance identity recognition accuracy for Black and White faces, it may not reduce differences in facial memory for these racial categories.
Collapse
Affiliation(s)
| | | | | | | | - Xia Fang
- Zhejiang University, Zhejiang, China
| |
Collapse
|
28
|
Rossion B. Twenty years of investigation with the case of prosopagnosia PS to understand human face identity recognition. Part I: Function. Neuropsychologia 2022; 173:108278. [DOI: 10.1016/j.neuropsychologia.2022.108278] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Revised: 03/28/2022] [Accepted: 05/25/2022] [Indexed: 10/18/2022]
|
29
|
Ma X, Fu M, Zhang X, Song X, Becker B, Wu R, Xu X, Gao Z, Kendrick K, Zhao W. Own Race Eye-Gaze Bias for All Emotional Faces but Accuracy Bias Only for Sad Expressions. Front Neurosci 2022; 16:852484. [PMID: 35645716 PMCID: PMC9133890 DOI: 10.3389/fnins.2022.852484] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Accepted: 04/26/2022] [Indexed: 12/05/2022] Open
Abstract
Own-race faces tend to be recognized more accurately than those of other, less familiar races; however, findings to date have been inconclusive. The present study aimed to determine whether Chinese exhibit different recognition accuracy and eye gaze patterns for Asian (own-race) and White (other-race) facial expressions (neutral, happiness, sadness, anger, disgust, fear). A total of 89 healthy Chinese adults viewed Asian and White facial expressions while undergoing eye-tracking and were subsequently required to identify the expressions and rate their intensity and effect on arousal. Results revealed that subjects recognized sad expressions in Asian faces better than in White ones. On the other hand, recognition accuracy was higher for White neutral, happy, fearful, and disgusted expressions, although this may have been due to subjects more often misclassifying these Asian expressions as sadness. Moreover, subjects viewed the eyes of emotional expressions for longer in Asian than in White faces, as well as the nose of sad ones, especially during the late phase of presentation, whereas pupil sizes, indicative of cognitive load and arousal, were smaller. Eye-gaze patterns were not, however, associated with recognition accuracy. Overall, the findings demonstrate an own-race bias in Chinese for identifying sad expressions and, more generally across emotional expressions, for viewing the eye region of emotional faces for longer and with reduced pupil size. Interestingly, subjects were significantly more likely to misidentify Asian faces as sad, resulting in an apparent other-race bias for recognizing neutral, happy, fearful, and disgusted expressions.
Collapse
Affiliation(s)
- Xiaole Ma
- Key Laboratory for NeuroInformation of Ministry of Education, Center for Information in Medicine, The Clinical Hospital of Chengdu Brain Science Institute, University of Electronic Science and Technology of China, Chengdu, China
- School of Education Science, Shanxi University, Taiyuan, China
| | - Meina Fu
- Key Laboratory for NeuroInformation of Ministry of Education, Center for Information in Medicine, The Clinical Hospital of Chengdu Brain Science Institute, University of Electronic Science and Technology of China, Chengdu, China
| | - Xiaolu Zhang
- Key Laboratory for NeuroInformation of Ministry of Education, Center for Information in Medicine, The Clinical Hospital of Chengdu Brain Science Institute, University of Electronic Science and Technology of China, Chengdu, China
| | - Xinwei Song
- Key Laboratory for NeuroInformation of Ministry of Education, Center for Information in Medicine, The Clinical Hospital of Chengdu Brain Science Institute, University of Electronic Science and Technology of China, Chengdu, China
| | - Benjamin Becker
- Key Laboratory for NeuroInformation of Ministry of Education, Center for Information in Medicine, The Clinical Hospital of Chengdu Brain Science Institute, University of Electronic Science and Technology of China, Chengdu, China
| | - Renjing Wu
- Key Laboratory for NeuroInformation of Ministry of Education, Center for Information in Medicine, The Clinical Hospital of Chengdu Brain Science Institute, University of Electronic Science and Technology of China, Chengdu, China
| | - Xiaolei Xu
- Key Laboratory for NeuroInformation of Ministry of Education, Center for Information in Medicine, The Clinical Hospital of Chengdu Brain Science Institute, University of Electronic Science and Technology of China, Chengdu, China
| | - Zhao Gao
- School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China
| | - Keith Kendrick
- Key Laboratory for NeuroInformation of Ministry of Education, Center for Information in Medicine, The Clinical Hospital of Chengdu Brain Science Institute, University of Electronic Science and Technology of China, Chengdu, China
- *Correspondence: Keith Kendrick,
| | - Weihua Zhao
- Key Laboratory for NeuroInformation of Ministry of Education, Center for Information in Medicine, The Clinical Hospital of Chengdu Brain Science Institute, University of Electronic Science and Technology of China, Chengdu, China
- Weihua Zhao,
| |
Collapse
|
30
|
Wahlheim CN, Eisenberg ML, Stawarczyk D, Zacks JM. Understanding Everyday Events: Predictive-Looking Errors Drive Memory Updating. Psychol Sci 2022; 33:765-781. [PMID: 35439426 PMCID: PMC9248286 DOI: 10.1177/09567976211053596] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Memory-guided predictions can improve event comprehension by guiding attention and the eyes to the location where an actor is about to perform an action. But when events change, viewers may experience predictive-looking errors and need to update their memories. In two experiments (Ns = 38 and 98), we examined the consequences of mnemonic predictive-looking errors for comprehending and remembering event changes. University students watched movies of everyday activities with actions that were repeated exactly and actions that were repeated with changed features: for example, an actor reached for a paper towel on one occasion and a dish towel on the next. Memory guidance led to predictive-looking errors that were associated with better memory for subsequently changed event features. These results indicate that retrieving recent event features can guide predictions during unfolding events and that error signals derived from mismatches between mnemonic predictions and actual events contribute to new learning.
Collapse
Affiliation(s)
| | - Michelle L Eisenberg
- Department of Psychological and Brain Sciences, Washington University in St. Louis
| | - David Stawarczyk
- Department of Psychological and Brain Sciences, Washington University in St. Louis
- Department of Psychology, Psychology and Neuroscience of Cognition Research Unit, University of Liège
| | - Jeffrey M Zacks
- Department of Psychological and Brain Sciences, Washington University in St. Louis
| |
Collapse
|
31
|
Hsiao JHW, Liao W, Tso RVY. Impact of mask use on face recognition: an eye-tracking study. Cogn Res Princ Implic 2022; 7:32. [PMID: 35394572 PMCID: PMC8990495 DOI: 10.1186/s41235-022-00382-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2021] [Accepted: 03/21/2022] [Indexed: 12/05/2022] Open
Abstract
We examined how mask use affects performance and eye movements in face recognition and whether strategy change reflected in eye movements is associated with performance change. Eighty-eight participants performed face recognition with masked faces either during learning only, during recognition only, or during both learning and recognition. As compared with the baseline condition where faces were unmasked during both learning and recognition, participants had impaired performance in all three scenarios, with larger impairment when mask conditions during learning and recognition did not match. When recognizing unmasked faces, whether the faces were learned with or without a mask on did not change eye movement behavior. Nevertheless, when recognizing unmasked faces that were learned with a mask on, participants who adopted more eyes-focused patterns had less performance impairment as compared with the baseline condition. When recognizing masked faces, participants had more eyes-focused patterns and more consistent gaze transition behavior than recognizing unmasked faces regardless of whether the faces were learned with or without a mask on. Nevertheless, when recognizing masked faces that were learned without a mask, participants whose gaze transition behavior was more consistent had less performance impairment as compared with the baseline condition. Thus, although eye movements during recognition were mainly driven by the mask condition during recognition but not that during learning, those who adjusted their strategy according to the mask condition difference between learning and recognition had better performance. This finding has important implications for identifying populations vulnerable to the impact of mask use and potential remedial strategies.
Collapse
Affiliation(s)
- Janet Hui-Wen Hsiao
- Department of Psychology, University of Hong Kong, Pokfulam Road, Hong Kong, Hong Kong SAR, China
- The State Key Laboratory of Brain and Cognitive Sciences, University of Hong Kong, Hong Kong, Hong Kong SAR, China
| | - Weiyan Liao
- Department of Psychology, University of Hong Kong, Pokfulam Road, Hong Kong, Hong Kong SAR, China
| | - Ricky Van Yip Tso
- Department of Psychology, The Education University of Hong Kong, Tai Po, New Territories, Hong Kong SAR, China
- Psychological Assessment and Clinical Research Unit, The Education University of Hong Kong, Tai Po, New Territories, Hong Kong SAR, China
| |
Collapse
|
32
|
Mazloum-Farzaghi N, Shing N, Mendoza L, Barense MD, Ryan JD, Olsen RK. The impact of aging and repetition on eye movements and recognition memory. AGING, NEUROPSYCHOLOGY, AND COGNITION 2022; 30:402-428. [PMID: 35189778 DOI: 10.1080/13825585.2022.2039587] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
The modulation of neural activity in the hippocampus, a region critical for memory, by gaze fixations has been shown to be weaker in older adults compared to younger adults. However, as such research has relied on indirect measures of memory, it remains unclear whether the relationship between visual exploration and direct measures of memory is similarly disrupted in aging. The current study tested older and younger adults on a face memory eye-tracking task previously used by our group that showed that recognition memory for faces presented across variable, but not fixed, viewpoints relies on a hippocampal-dependent binding function. Here, we examined how aging influences eye movement measures that reveal the amount (cumulative sampling) and extent (distribution of gaze fixations) of visual exploration. We also examined how aging influences direct (subsequent conscious recognition) and indirect (eye movement repetition effect) expressions of memory. No age differences were found in direct recognition regardless of facial viewpoint. However, the eye movement measures revealed key group differences. Compared to younger adults, older adults exhibited more cumulative sampling, a different distribution of fixations, and a larger repetition effect. Moreover, there was a positive relationship between cumulative sampling and direct recognition in younger adults, but not older adults. Neither age group showed a relationship between the repetition effect and direct recognition. Thus, despite similar direct recognition, age-related differences were observed in visual exploration and in an indirect eye-movement memory measure, suggesting that the two groups may acquire, retain, and use different facial information to guide recognition.
Collapse
Affiliation(s)
- Negar Mazloum-Farzaghi
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- The Rotman Research Institute, Baycrest Hospital, Toronto, ON, Canada
| | - Nathanael Shing
- The Rotman Research Institute, Baycrest Hospital, Toronto, ON, Canada
| | - Leanne Mendoza
- The Rotman Research Institute, Baycrest Hospital, Toronto, ON, Canada
| | - Morgan D. Barense
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- The Rotman Research Institute, Baycrest Hospital, Toronto, ON, Canada
| | - Jennifer D. Ryan
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- The Rotman Research Institute, Baycrest Hospital, Toronto, ON, Canada
| | - Rosanna K. Olsen
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- The Rotman Research Institute, Baycrest Hospital, Toronto, ON, Canada
| |
Collapse
|
33
|
Anger or happiness superiority effect: A face in the crowd study involving nine emotions expressed by nine people. CURRENT PSYCHOLOGY 2022. [DOI: 10.1007/s12144-022-02762-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
34
|
The Anatomy behind Eyebrow Positioning: A Clinical Guide Based on Current Anatomic Concepts. Plast Reconstr Surg 2022; 149:869-879. [PMID: 35139063 DOI: 10.1097/prs.0000000000008966] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
BACKGROUND The position of the eyebrow is known to reflect emotional status and to provide a plethora of nonverbal information. Although the eyebrow has no direct attachment to underlying bone, it is subject to the interplay between the various periorbital muscles, which when acting together, permit important nonverbal cues to be conveyed. Understanding the balance and interplay between these muscles is of crucial importance when targeting the periorbital area with neuromodulators. The authors' aims were to summarize current anatomic and clinical knowledge so as to provide a foundation that physicians can rely on to improve and increase the predictability of patient outcomes when treating the periorbital region with neuromodulators for aesthetic purposes. METHODS This narrative review is based on the anatomic and clinical experience of the authors dissecting and treating the periorbital region with specific focus on the glabella and the forehead. RESULTS This narrative review covers (1) a brief description of the relevant periorbital muscle anatomy, (2) an analysis of each muscle's contribution to various facial expressions, and (3) an anatomic and physiologic simulation of the muscular effects of specific neuromodulator injection sites. CONCLUSION By understanding functional anatomy of the periorbital muscles and combining this knowledge with individualized assessment and treatment planning, it is possible to achieve aesthetically pleasing, predictable, and reproducible treatment outcomes that positively impact perception of nonverbal cues when administering neuromodulators.
Collapse
|
35
|
Yao L, Dai Q, Wu Q, Liu Y, Yu Y, Guo T, Zhou M, Yang J, Takahashi S, Ejima Y, Wu J. Eye Size Affects Cuteness in Different Facial Expressions and Ages. Front Psychol 2022; 12:674456. [PMID: 35087437 PMCID: PMC8786738 DOI: 10.3389/fpsyg.2021.674456] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Accepted: 12/01/2021] [Indexed: 11/13/2022] Open
Abstract
Researchers have suggested that infants exhibiting the baby schema are considered cute. Such studies have mainly focused on changes in the overall set of baby schema facial features. However, whether a change in eye size alone affects the perception of cuteness across different facial expressions and ages has not been explicitly evaluated until now. In the present study, a paired comparison method and a 7-point scale were used to investigate the effects of eye size on perceived cuteness across facial expressions (positive, neutral, and negative) and ages (adults and infants). The results show that stimuli with large eyes were perceived to be cuter than both unmanipulated eyes and small eyes across all facial expressions and age groups. This suggests not only that the effect of the baby schema on cuteness is based on changes in a set of features but also that eye size as an individual feature can affect the perception of cuteness.
Collapse
Affiliation(s)
- Lichang Yao
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
| | - Qi Dai
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
| | - Qiong Wu
- School of Education, Suzhou University of Science and Technology, Suzhou, China
| | - Yang Liu
- School of Education, Suzhou University of Science and Technology, Suzhou, China
| | - Yiyang Yu
- Cognitive Neuroscience Laboratory, Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
| | - Ting Guo
- Cognitive Neuroscience Laboratory, Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
| | - Mengni Zhou
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
| | - Jiajia Yang
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
| | - Satoshi Takahashi
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
| | - Yoshimichi Ejima
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
| | - Jinglong Wu
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Science, Shenzhen, China
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China
| |
Collapse
|
36
|
Pollmann S, Schneider WX. Working memory and active sampling of the environment: Medial temporal contributions. HANDBOOK OF CLINICAL NEUROLOGY 2022; 187:339-357. [PMID: 35964982 DOI: 10.1016/b978-0-12-823493-8.00029-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Working memory (WM) refers to the ability to maintain and actively process information, either derived from perception or from long-term memory (LTM), for intelligent thought and action. This chapter focuses on the contributions of the temporal lobe, particularly the medial temporal lobe (MTL), to WM. First, neuropsychological evidence for the involvement of the MTL in WM maintenance is reviewed, arguing for a crucial role in the case of retaining complex relational bindings between memorized features. Next, MTL contributions at the level of neural mechanisms are covered, with a focus on WM encoding and maintenance, including interactions with ventral temporal cortex. Among WM use processes, we focus on active sampling of environmental information, a key input source to capacity-limited WM. MTL contributions to the bidirectional relationship between active sampling and memory are highlighted: WM control of active sampling, and sampling as a way of selecting input to WM. Memory-based sampling studies relying on scene and object inspection, visual-based exploration behavior (e.g., vicarious behavior), and memory-guided visual search are reviewed. The conclusion is that the MTL serves an important function in the selection of information from perception and transfer from LTM to capacity-limited WM.
Collapse
Affiliation(s)
- Stefan Pollmann
- Department of Psychology and Center for Behavioral Brain Sciences, Otto-von-Guericke-University, Magdeburg, Germany.
| | - Werner X Schneider
- Department of Psychology and Center for Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
| |
Collapse
|
37
|
Masulli P, Galazka M, Eberhard D, Johnels JÅ, Gillberg C, Billstedt E, Hadjikhani N, Andersen TS. Data-driven analysis of gaze patterns in face perception: Methodological and clinical contributions. Cortex 2021; 147:9-23. [PMID: 34998084 DOI: 10.1016/j.cortex.2021.11.011] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2021] [Revised: 10/18/2021] [Accepted: 11/12/2021] [Indexed: 01/05/2023]
Abstract
Gaze patterns during face perception have been shown to relate to psychiatric symptoms. Standard analysis of gaze behavior includes calculating fixations within arbitrarily predetermined areas of interest. In contrast to this approach, we present an objective, data-driven method for the analysis of gaze patterns and their relation to diagnostic test scores. This method was applied to data acquired in an adult sample (N = 111) of psychiatry outpatients while they freely looked at images of human faces. Dimensional symptom scores of autism, attention deficit, and depression were collected. A linear regression model based on Principal Component Analysis coefficients computed for each participant was used to model symptom scores. We found that specific components of gaze patterns predicted autistic traits as well as depression symptoms. Gaze patterns shifted away from the eyes with increasing autism traits, a well-known effect. Additionally, the model revealed a lateralization component, with a reduction of the left visual field bias increasing with both autistic traits and depression symptoms independently. Taken together, our model provides a data-driven alternative for gaze data analysis, which can be applied to dimensionally-, rather than categorically-defined clinical subgroups within a variety of contexts. The methodological and clinical contributions of this approach are discussed.
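As a rough illustration of the kind of pipeline described here (per-participant gaze patterns summarised by Principal Component Analysis and symptom scores modelled with linear regression), the following Python sketch uses scikit-learn on simulated data. The fixation-map construction, the number of components, and all variable names are our assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Assumed input: one flattened fixation-density map per participant
# (e.g., fixations accumulated on a coarse 20 x 20 grid over the face).
n_participants, grid = 111, 20
gaze_maps = rng.random((n_participants, grid * grid))
symptom_scores = rng.random(n_participants)  # e.g., dimensional autism-trait scores

# Reduce each participant's gaze pattern to a few PCA coefficients ...
pca = PCA(n_components=10)
coefficients = pca.fit_transform(gaze_maps)

# ... and model symptom scores as a linear function of those coefficients.
model = LinearRegression().fit(coefficients, symptom_scores)
print("R^2 on the training data:", model.score(coefficients, symptom_scores))
print("Components most strongly related to the score:",
      np.argsort(np.abs(model.coef_))[::-1][:3])
```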
Collapse
Affiliation(s)
- Paolo Masulli
- Department of Applied Mathematics and Computer Science DTU Compute, Section of Cognitive Systems, Technical University of Denmark, Kgs. Lyngby, Denmark; iMotions A/S, Copenhagen V, Denmark
| | - Martyna Galazka
- Gillberg Neuropsychiatry Center, University of Gothenburg, Gothenburg, Sweden
| | - David Eberhard
- Gillberg Neuropsychiatry Center, University of Gothenburg, Gothenburg, Sweden.
| | | | | | - Eva Billstedt
- Gillberg Neuropsychiatry Center, University of Gothenburg, Gothenburg, Sweden
| | - Nouchine Hadjikhani
- Gillberg Neuropsychiatry Center, University of Gothenburg, Gothenburg, Sweden; Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, USA.
| | - Tobias S Andersen
- Department of Applied Mathematics and Computer Science DTU Compute, Section of Cognitive Systems, Technical University of Denmark, Kgs. Lyngby, Denmark
| |
Collapse
|
38
|
Human face and gaze perception is highly context specific and involves bottom-up and top-down neural processing. Neurosci Biobehav Rev 2021; 132:304-323. [PMID: 34861296 DOI: 10.1016/j.neubiorev.2021.11.042] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Revised: 11/24/2021] [Accepted: 11/24/2021] [Indexed: 11/21/2022]
Abstract
This review summarizes human perception and processing of face and gaze signals. Face and gaze signals are important means of non-verbal social communication. The review highlights that: (1) some evidence is available suggesting that the perception and processing of facial information starts in the prenatal period; (2) the perception and processing of face identity, expression and gaze direction are highly context-specific, the effects of race and culture being a case in point. Culture affects, by means of experiential shaping and social categorization, the way in which information on face and gaze is collected and perceived; (3) face and gaze processing occurs in the so-called 'social brain'. Accumulating evidence suggests that the processing of facial identity, facial emotional expression and gaze involves two parallel and interacting pathways: a fast and crude subcortical route and a slower cortical pathway. The flow of information is bi-directional and includes bottom-up and top-down processing. The cortical networks particularly include the fusiform gyrus, superior temporal sulcus (STS), intraparietal sulcus, temporoparietal junction and medial prefrontal cortex.
Collapse
|
39
|
Vassallo S, Douglas J. Visual scanpath training to emotional faces following severe traumatic brain injury: A single case design. J Eye Mov Res 2021; 14. [PMID: 34760060 PMCID: PMC8575428 DOI: 10.16910/jemr.14.4.6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
The visual scanpath to emotional facial expressions was recorded in BR, a 35-year-old male with chronic severe traumatic brain injury (TBI), both before and after he underwent intervention. The novel intervention paradigm combined visual scanpath training with verbal feedback and was implemented over a 3-month period using a single case design (AB) with one follow up session. At baseline BR's scanpath was restricted, characterised by gaze allocation primarily to salient facial features on the right side of the face stimulus. Following intervention his visual scanpath became more lateralised, although he continued to demonstrate an attentional bias to the right side of the face stimulus. This study is the first to demonstrate change in both the pattern and the position of the visual scanpath to emotional faces following intervention in a person with chronic severe TBI. In addition, these findings extend upon our previous work to suggest that modification of the visual scanpath through targeted facial feature training can support improved facial recognition performance in a person with severe TBI.
Collapse
Affiliation(s)
- Suzane Vassallo
- La Trobe University, Melbourne, Australia
- University of Technology, Sydney, Australia
| | - Jacinta Douglas
- La Trobe University, Melbourne, Australia
- Summer Foundation, Melbourne, Australia
| |
Collapse
|
40
|
Howard SR, Dyer AG, Garcia JE, Giurfa M, Reser DH, Rosa MGP, Avarguès-Weber A. Naïve and Experienced Honeybee Foragers Learn Normally Configured Flowers More Easily Than Non-configured or Highly Contrasted Flowers. Front Ecol Evol 2021. [DOI: 10.3389/fevo.2021.662336] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Angiosperms have evolved to attract and/or deter specific pollinators. Flowers provide signals and cues such as scent, colour, size, pattern, and shape, which allow certain pollinators to more easily find and visit the same type of flower. Over evolutionary time, bees and angiosperms have co-evolved resulting in flowers being more attractive to bee vision and preferences, and allowing bees to recognise specific flower traits to make decisions on where to forage. Here we tested whether bees are instinctively tuned to process flower shape by training both flower-experienced and flower-naïve honeybee foragers to discriminate between pictures of two different flower species when images were either normally configured flowers or flowers which were scrambled in terms of spatial configuration. We also tested whether increasing picture contrast, to make flower features more salient, would improve or impair performance. We used four flower conditions: (i) normally configured greyscale flower pictures, (ii) scrambled flower configurations, (iii) high contrast normally configured flowers, and (iv) asymmetrically scrambled flowers. While all flower pictures contained very similar spatial information, both experienced and naïve bees were better able to learn to discriminate between normally configured flowers than between any of the modified versions. Our results suggest that a specialisation in flower recognition in bees is due to a combination of hard-wired neural circuitry and experience-dependent factors.
Collapse
|
41
|
Kawakami K, Friesen JP, Williams A, Vingilis-Jaremko L, Sidhu DM, Rodriguez-Bailón R, Cañadas E, Hugenberg K. Impact of perceived interpersonal similarity on attention to the eyes of same-race and other-race faces. COGNITIVE RESEARCH-PRINCIPLES AND IMPLICATIONS 2021; 6:68. [PMID: 34727302 PMCID: PMC8563912 DOI: 10.1186/s41235-021-00336-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/30/2020] [Accepted: 10/12/2021] [Indexed: 11/19/2022]
Abstract
One reason for the persistence of racial discrimination may be anticipated dissimilarity with racial outgroup members that prevents meaningful interactions. In the present research, we investigated whether perceived similarity would impact the processing of same-race and other-race faces.
Specifically, in two experiments, we varied the extent to which White participants were ostensibly similar to targets via bogus feedback on a personality test. With an eye tracker, we measured the effect of this manipulation on attention to the eyes, a critical region for person perception and face memory. In Experiment 1, we monitored the impact of perceived interpersonal similarity on White participants’ attention to the eyes of same-race White targets. In Experiment 2, we replicated this procedure, but White participants were presented with either same-race White targets or other-race Black targets in a between-subjects design. The pattern of results in both experiments indicated a positive linear effect of similarity—greater perceived similarity between participants and targets predicted more attention to the eyes of White and Black faces. The implications of these findings related to top-down effects of perceived similarity for our understanding of basic processes in face perception, as well as intergroup relations, are discussed.
Collapse
Affiliation(s)
| | | | | | - Larissa Vingilis-Jaremko
- York University, Toronto, Canada
- Canadian Association for Girls in Science, Mississauga, Canada
| | | | | | | | | |
Collapse
|
42
|
Hine K, Okubo H. Overestimation of eye size: People see themselves with bigger eyes in a holistic approach. Acta Psychol (Amst) 2021; 220:103419. [PMID: 34543806 DOI: 10.1016/j.actpsy.2021.103419] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2021] [Revised: 07/29/2021] [Accepted: 09/14/2021] [Indexed: 11/29/2022] Open
Abstract
A face contains crucial information for identification; moreover, face recognition is superior to other types of recognition. Notably, one's own face is recognized better than other familiar faces. However, it is unclear whether one's own face, especially one's own internal facial features, is represented more accurately than other faces. Here, we investigated how one's own internal facial features were represented. We conducted a psychological experiment in which the participants were required to adjust eye size to the real size in photos of their own or well-known celebrities' faces. To investigate why individuals' own and celebrity facial representations were different, two types of photos were prepared, with and without external features. It was found that the accuracy of eye size for one's own face was better than that for celebrities' faces in the condition without external features, in which holistic processing was less involved than in the condition with external features. This implies that the eye size of one's own face was represented more accurately than that of other familiar faces when external features were removed. Moreover, the accuracy of the eye size of one's own face in the condition with external features was worse than that in the condition without external features; the adjusted eye size in the condition with external features was larger than that in the condition without external features. In contrast, for celebrities' faces, there was no significant difference between the conditions with and without external features. The adjusted eye sizes in all conditions were overestimated compared to real eye sizes. Previous research indicated that eye size is adjusted to a larger size when a face is evaluated as more attractive, and such evaluation is related to holistic processing. Based on this perspective, it could be that one's own face was represented as more attractive in the condition with external features in the current study. Taken together, the results indicated that the representation of one's own eye size, which is an internal facial feature, was affected by the visibility of the external features.
Collapse
Affiliation(s)
- Kyoko Hine
- Toyohashi University of Technology, Toyohashi, Aichi, Japan.
| | - Hikaru Okubo
- Department of Information Environment, Tokyo Denki University, Adachi-ku, Tokyo, Japan
| |
Collapse
|
43
|
Martin SA, Morrison SD, Patel V, Capitán-Cañadas F, Sánchez-García A, Rodríguez-Conesa M, Bellinga RJ, Simon D, Capitán L, Satterwhite T, Nazerali R. Social Perception of Facial Feminization Surgery Outcomes: Does Gender Identity Alter Gaze? Aesthet Surg J 2021; 41:1207-1215. [PMID: 33336697 DOI: 10.1093/asj/sjaa377] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
BACKGROUND The evaluation of gender-affirming facial feminization surgery (FFS) outcomes can be highly subjective, which has resulted in a limited understanding of the social perception of favorable gender and aesthetic facial appearance following FFS. Eye-tracking technology has introduced an objective measure of viewer subconscious gaze. OBJECTIVES The aim of this study was to use eye-tracking technology to measure attention and perception of surgery-naive cisgender female and feminized transgender faces, based on viewer gender identity. METHODS Thirty-two participants (18 cisgender and 14 transgender) were enrolled and shown 5 photographs each of surgery-naive cisgender female and feminized transgender faces. Gaze was captured with a Tobii Pro X2-60 eye-tracking device (Tobii, Stockholm, Sweden) and participants rated the gender and aesthetic appearance of each face on Likert-type scales. RESULTS Total image gaze fixation time did not differ by participant gender identity (6.00 vs 6.04 seconds, P = 0.889); however, transgender participants spent more time evaluating the forehead/brow, buccal/mandibular regions, and chin (P < 0.001). Multivariate regression analysis showed significant associations between viewer gender identity, age, race, and education, and the time spent evaluating gender salient facial features. Feminized faces were rated as more masculine with poorer aesthetic appearance than surgery-naive cisgender female faces; however, there was no significant difference in the distribution of gender appearance ratings assigned to each photograph by cisgender and transgender participants. CONCLUSIONS These results demonstrate that gender identity influences subconscious attention and gaze on female faces. Nevertheless, differences in gaze distribution did not correspond to subjective rated gender appearance for either surgery-naive cisgender female or feminized transgender faces, further illustrating the complexity of evaluating social perception of favorable FFS outcomes.
Collapse
Affiliation(s)
| | - Shane D Morrison
- Division of Plastic Surgery, Department of Surgery, University of Washington School of Medicine, Seattle, WA, USA
| | - Viren Patel
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | | | - Anabel Sánchez-García
- FACIALTEAM Surgical Group, HC Marbella International Hospital, Marbella, Málaga, Spain
| | | | - Raúl J Bellinga
- FACIALTEAM Surgical Group, HC Marbella International Hospital, Marbella, Málaga, Spain
| | - Daniel Simon
- FACIALTEAM Surgical Group, HC Marbella International Hospital, Marbella, Málaga, Spain
| | - Luis Capitán
- FACIALTEAM Surgical Group, HC Marbella International Hospital, Marbella, Málaga, Spain
| | | | - Rahim Nazerali
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Stanford University, Palo Alto, CA, USA
| |
Collapse
|
44
|
Kanovský M, Halamová J, Strnádelová B, Moro R, Bielikova M. Pupil size variation in primary facial expressions–testing potential biomarker of self-criticism. Artif Intell Rev 2021. [DOI: 10.1007/s10462-021-10057-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
45
|
Automatic gaze to the nose region cannot be inhibited during observation of facial expression in Eastern observers. Conscious Cogn 2021; 94:103179. [PMID: 34364139 DOI: 10.1016/j.concog.2021.103179] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2020] [Revised: 07/24/2021] [Accepted: 07/24/2021] [Indexed: 11/22/2022]
Abstract
Humans can extract a great deal of information about others very quickly. This is partly because the face automatically captures observers' attention. Specifically, the eyes can attract overt attention. Although it has been reported that not only the eyes but also the nose can capture initial oculomotor movement in Eastern observers, the generalizability of this finding remains unknown. In this study, we applied the "don't look" paradigm, wherein participants are asked not to fixate on a specific facial region (i.e., eyes, nose, and mouth) during an emotion recognition task with upright (Experiment 1) and inverted (Experiment 2) faces. In both experiments, we found that participants were less able to inhibit the initial part of their fixations to the nose, which can be interpreted as the nose automatically capturing attention. Together with previous studies, these results suggest that overt attention tends to be attracted to a specific part of the face, namely the nose region in Eastern observers.
Collapse
|
46
|
Wynn JS, Liu ZX, Ryan JD. Neural Correlates of Subsequent Memory-Related Gaze Reinstatement. J Cogn Neurosci 2021; 34:1547-1562. [PMID: 34272959 DOI: 10.1162/jocn_a_01761] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Mounting evidence linking gaze reinstatement-the recapitulation of encoding-related gaze patterns during retrieval-to behavioral measures of memory suggests that eye movements play an important role in mnemonic processing. Yet, the nature of the gaze scanpath, including its informational content and neural correlates, has remained in question. In this study, we examined eye movement and neural data from a recognition memory task to further elucidate the behavioral and neural bases of functional gaze reinstatement. Consistent with previous work, gaze reinstatement during retrieval of freely viewed scene images was greater than chance and predictive of recognition memory performance. Gaze reinstatement was also associated with viewing of informationally salient image regions at encoding, suggesting that scanpaths may encode and contain high-level scene content. At the brain level, gaze reinstatement was predicted by encoding-related activity in the occipital pole and BG, neural regions associated with visual processing and oculomotor control. Finally, cross-voxel brain pattern similarity analysis revealed overlapping subsequent memory and subsequent gaze reinstatement modulation effects in the parahippocampal place area and hippocampus, in addition to the occipital pole and BG. Together, these findings suggest that encoding-related activity in brain regions associated with scene processing, oculomotor control, and memory supports the formation, and subsequent recapitulation, of functional scanpaths. More broadly, these findings lend support to scanpath theory's assertion that eye movements both encode, and are themselves embedded in, mnemonic representations.
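One simple way to quantify gaze reinstatement of the kind analysed here is to correlate smoothed fixation-density maps from encoding and retrieval of the same image. The Python sketch below, including the map resolution, smoothing width, and function names, is an illustrative assumption rather than the exact measure used in the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(fixations, shape=(48, 64), sigma=2.0):
    """Turn (row, col) fixation positions into a smoothed, z-scored density map."""
    m = np.zeros(shape)
    for r, c in fixations:
        r = int(np.clip(r, 0, shape[0] - 1))
        c = int(np.clip(c, 0, shape[1] - 1))
        m[r, c] += 1
    m = gaussian_filter(m, sigma)
    return (m - m.mean()) / (m.std() + 1e-12)

def reinstatement(encode_fix, retrieve_fix):
    """Spatial correlation between encoding and retrieval gaze maps."""
    a = density_map(encode_fix).ravel()
    b = density_map(retrieve_fix).ravel()
    return float(np.corrcoef(a, b)[0, 1])

# Toy example: retrieval fixations that revisit encoding locations score
# higher than fixations corresponding to an unrelated viewing pattern.
enc = [(10, 12), (30, 40), (20, 50)]
ret_same = [(11, 13), (29, 41)]
ret_other = [(45, 5), (2, 60)]
print(reinstatement(enc, ret_same), reinstatement(enc, ret_other))
```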
Collapse
Affiliation(s)
| | | | - Jennifer D Ryan
- Rotman Research Institute at Baycrest Health Sciences
- University of Toronto
| |
Collapse
|
47
|
Frank K, Schuster L, Alfertshofer M, Baumbach SF, Herterich V, Giunta RE, Moellhoff N, Braig D, Ehrl D, Cotofana S. How Does Wearing a Facecover Influence the Eye Movement Pattern in Times of COVID-19? Aesthet Surg J 2021; 41:NP1118-NP1124. [PMID: 33693469 PMCID: PMC7989657 DOI: 10.1093/asj/sjab121] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
Background Since the emergence of the COVID-19 pandemic, facecovers have become a common sight. The effect of facecovers on gaze behavior when looking at faces has not yet been assessed. Objective The aim of the present study was to investigate potential differences in the eye movement patterns of observers exposed to images showing a face without and with a facecover, to identify whether gaze truly changes when identifying (masked) facial features. Materials and Methods The eye movements of a total of 64 study participants (28 males and 36 females) with a mean age of 31.84±9.0 years were analyzed in this cross-sectional observational study. Eye movement analysis was conducted based on positional changes of eye features within an x- and y-coordinate system while two images (face without/with facecover) were displayed for 8 seconds. Results The results of this study revealed that the sequence of focusing on facial regions was not altered by wearing a facecover and followed the sequence perioral, nose, periorbital. Wearing a facecover significantly increased the time spent focusing on the periorbital region and also increased the number of repeated eye fixations during the interval of visual stimulus presentation. No statistically significant differences were observed between male and female participants in their eye movement patterns across all investigated variables (p > 0.433). Conclusion Aesthetic practitioners could use the presented data to develop marketing and treatment strategies that primarily target the periorbital area, taking into account the altered eye movement pattern in times of COVID-19.
Collapse
Affiliation(s)
- Konstantin Frank
- Department for Hand, Plastic and Aesthetic Surgery, Ludwig Maximilian University of Munich, Munich, Germany
| | - Luca Schuster
- Department for Hand, Plastic and Aesthetic Surgery, Ludwig Maximilian University of Munich, Munich, Germany
| | - Michael Alfertshofer
- Department for Hand, Plastic and Aesthetic Surgery, Ludwig Maximilian University of Munich, Munich, Germany
| | - Sebastian Felix Baumbach
- Department of General, Trauma and Reconstructive Surgery, University Hospital, LMU Munich, Munich, Germany
| | - Viktoria Herterich
- Department of General, Trauma and Reconstructive Surgery, University Hospital, LMU Munich, Munich, Germany
| | - Riccardo E Giunta
- Department for Hand, Plastic and Aesthetic Surgery, Ludwig Maximilian University of Munich, Munich, Germany
| | - Nicholas Moellhoff
- Department for Hand, Plastic and Aesthetic Surgery, Ludwig Maximilian University of Munich, Munich, Germany
| | - David Braig
- Department for Hand, Plastic and Aesthetic Surgery, Ludwig Maximilian University of Munich, Munich, Germany
| | - Denis Ehrl
- Department for Hand, Plastic and Aesthetic Surgery, Ludwig Maximilian University of Munich, Munich, Germany
| | - Sebastian Cotofana
- Department of Clinical Anatomy, Mayo Clinic College of Medicine and Science, Rochester, MN, USA
| |
Collapse
|
48
|
Harrison MT, Strother L. Does face-selective cortex show a left visual field bias for centrally-viewed faces? Neuropsychologia 2021; 159:107956. [PMID: 34265343 DOI: 10.1016/j.neuropsychologia.2021.107956] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2021] [Revised: 07/07/2021] [Accepted: 07/09/2021] [Indexed: 12/29/2022]
Abstract
The left half of a centrally-viewed face contributes more strongly to recognition performance than the right. This left visual field (LVF) advantage is typically attributed to an untested assumption that face-selective cortex in the right hemisphere (RH) exhibits a contralateral bias, even for centrally-viewed faces. We tested the validity of this assumption using a behavioral measure of the LVF advantage and an fMRI experiment that measured laterality of face-selective cortex and neural contralateral bias. In the behavioral experiment, participants performed a chimeric face-matching task (Harrison and Strother, 2019). In the fMRI experiment, participants viewed chimeric faces comprised of face halves that either repeated or changed simultaneously in both hemifields, or repeated in one hemifield and changed in the other. This enabled us to measure lateralization of fMRI face-repetition suppression and hemifield-specific half-face sensitivity in face-selective cortex. We found that LVF bias in the fusiform face area (FFA) and right-lateralization of the FFA for changing versus repeated faces were both positively correlated with a behavioral measure of the LVF advantage for upright (but not inverted) faces. Results from regression analyses showed that LVF bias in the right FFA and FFA laterality make separable contributions to the prediction of our behavioral measure of the LVF bias for upright faces. Our results confirm a ubiquitous but previously untested assumption that RH superiority combined with contralateral bias in face-selective cortex explains the LVF advantage in face recognition. Specifically, our results show that neural LVF bias in the right FFA is sufficient to explain the relationship between FFA laterality and the perceptual LVF bias for centrally-viewed faces.
Collapse
Affiliation(s)
- Matthew T Harrison
- University of Nevada Reno Institute for Neuroscience, Department of Psychology, MS0296 1664 N. Virginia Street Reno, NV, 89557, USA.
| | - Lars Strother
- University of Nevada Reno Institute for Neuroscience, Department of Psychology, MS0296 1664 N. Virginia Street Reno, NV, 89557, USA
| |
Collapse
|
49
|
Stelter M, Rommel M, Degner J. (Eye-) Tracking the Other-Race Effect: Comparison of Eye Movements During Encoding and Recognition of Ingroup Faces With Proximal and Distant Outgroup Faces. SOCIAL COGNITION 2021. [DOI: 10.1521/soco.2021.39.3.366] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
Abstract
People experience difficulties recognizing faces of ethnic outgroups, known as the other-race effect. The present eye-tracking study investigates if this effect is related to differences in visual attention to ingroup and outgroup faces. We measured gaze fixations to specific facial features and overall eye-movement activity level during an old/new recognition task comparing ingroup faces with proximal and distal ethnic outgroup faces. Recognition was best for ingroup faces and decreased gradually for proximal and distal outgroup faces. Participants attended more to the eyes of ingroup faces than outgroup faces, but this effect was unrelated to recognition performance. Ingroup-outgroup differences in eye-movement activity level did not emerge during the study phase, but during the recognition phase, with ingroup-outgroup differences varying as a function of recognition accuracy and old/new effects. Overall, ingroup-outgroup effects on recognition performance and eye movements were more pronounced for recognition of new items, emphasizing the role of retrieval processes.
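Fixation-to-feature measures of the kind reported in this and several of the preceding entries are typically computed as dwell time within predefined areas of interest (AOIs). The short Python sketch below shows one way such proportions could be derived from fixation data; the AOI rectangles, coordinates, and function name are illustrative assumptions rather than any study's actual definitions.

```python
import numpy as np

# Assumed rectangular AOIs in image pixel coordinates: (x_min, y_min, x_max, y_max).
AOIS = {
    "eyes":  (120, 140, 380, 210),
    "nose":  (210, 210, 290, 300),
    "mouth": (190, 310, 310, 370),
}

def dwell_proportions(fixations):
    """Proportion of total fixation time spent in each AOI.

    fixations: list of (x, y, duration_ms) tuples for one trial.
    """
    totals = {name: 0.0 for name in AOIS}
    overall = sum(d for _, _, d in fixations)
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += dur
                break  # a fixation is counted in at most one AOI
    return {name: t / overall for name, t in totals.items()}

# Example trial: three fixations, mostly on the eyes.
trial = [(250, 180, 300), (240, 175, 250), (250, 260, 150)]
print(dwell_proportions(trial))
```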
Collapse
|
50
|
A consensus-based elastic matching algorithm for mapping recall fixations onto encoding fixations in the looking-at-nothing paradigm. Behav Res Methods 2021; 53:2049-2068. [PMID: 33754324 PMCID: PMC8516795 DOI: 10.3758/s13428-020-01513-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/09/2020] [Indexed: 11/08/2022]
Abstract
We present an algorithmic method for aligning recall fixations with encoding fixations, to be used in looking-at-nothing paradigms that either record recall eye movements during silence or aim to speed up the analysis of recall data recorded during speech. The algorithm utilizes a novel consensus-based elastic matching algorithm to estimate which encoding fixations correspond to later recall fixations. This is not a scanpath comparison method, as fixation sequence order is ignored and only position configurations are used. The algorithm has three internal parameters and is reasonably stable over a wide range of parameter values. We then evaluate the performance of our algorithm by investigating whether the recalled objects identified by the algorithm correspond with independent assessments of which objects in the image are marked as subjectively important. Our results show that the mapped recall fixations align well with important regions of the images. This result is exemplified in four groups of use cases: investigating the roles of low-level visual features, faces, signs and text, and people of different sizes in the recall of encoded scenes. The plots from these examples corroborate the finding that the algorithm aligns recall fixations with the most likely important regions in the images. The examples also illustrate how the algorithm can differentiate between image objects that were fixated during silent recall and those that were not visually attended, even though they were fixated during encoding.
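To illustrate the mapping problem this method addresses (assigning recall fixations to encoding fixations using only position configurations, with fixation order ignored), here is a deliberately simplified Python sketch based on nearest-neighbour assignment after centring each fixation set. It is not the consensus-based elastic matching algorithm itself, and the function name and centring step are our assumptions.

```python
import numpy as np

def map_recall_to_encoding(encoding_fix, recall_fix):
    """Assign each recall fixation to its closest encoding fixation.

    Inputs are (N, 2) and (M, 2) arrays of x/y positions. Temporal order
    is ignored; only positions are used. Each set is centred on its own
    centroid to absorb a global shift between encoding and recall
    (a crude stand-in for elastic alignment).
    """
    enc = np.asarray(encoding_fix, float)
    rec = np.asarray(recall_fix, float)
    enc_c = enc - enc.mean(axis=0)
    rec_c = rec - rec.mean(axis=0)
    # Pairwise distances between centred recall and encoding fixations.
    dists = np.linalg.norm(rec_c[:, None, :] - enc_c[None, :, :], axis=2)
    mapping = dists.argmin(axis=1)  # index of the matched encoding fixation
    return mapping, dists[np.arange(len(rec)), mapping]

# Example: recall fixations shifted by a constant offset still map back
# to the encoding fixations they originated from.
enc = np.array([[100, 100], [400, 120], [250, 300]])
rec = enc[[2, 0]] + np.array([30, -20])  # recall of two of the three objects
print(map_recall_to_encoding(enc, rec))
```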
Collapse
|