1.
Keck J, Honekamp C, Gebhardt K, Nolte S, Linka M, de Haas B, Munzert J, Krüger K, Krüger B. Exercise-induced inflammation alters the perception and visual exploration of emotional interactions. Brain Behav Immun Health 2024; 39:100806. PMID: 38974339; PMCID: PMC11225855; DOI: 10.1016/j.bbih.2024.100806.
Abstract
Introduction: The study investigated whether an exercise-induced pro-inflammatory response alters the perception and visual exploration of emotional body language in social interactions. Methods: In a within-subject design, 19 healthy male adults aged 19-33 years performed a 45-min downhill run at 70% of their VO2max on a treadmill to induce maximal myokine blood elevations, leading to a pro-inflammatory status. Two control conditions were used: a control run with no decline and a rest condition without physical exercise. Blood samples were taken before (T0), directly after (T1), 3 h after (T3), and 24 h after (T24) each exercise to analyze the inflammatory response. Three hours after exercise, participants observed point-light displays (PLDs) of human interactions portraying four emotions (happiness, affection, sadness, and anger). Participants categorized the emotional content, rated the emotional intensity of the stimuli, and indicated their confidence in their ratings. Eye movements during the entire paradigm and self-reported current mood were also recorded. Results: The downhill condition produced significant elevations of the measured inflammatory markers (IL-6, CRP, MCP-1) and of the muscle-damage marker myoglobin compared with the control run, indicating a pro-inflammatory state after the downhill run. Emotion recognition rates decreased significantly after the downhill run, whereas no such effect was observed after the control run. Participants' sensitivity to emotion-specific cues also declined. However, the downhill run had no effect on perceived emotional intensity or on subjective confidence in the ratings. Visual scanning behavior was also affected: after the downhill run, participants fixated more on sad stimuli, whereas in the control conditions they fixated more while observing happy stimuli.
Conclusion: Our study demonstrates that inflammation, induced through a downhill running model, impairs the perception and recognition of emotional content. Specifically, inflammation decreased recognition rates for the emotional content of social interactions, attributable to diminished discrimination ability across all emotional categories. We also observed alterations in visual exploration behavior. This confirms that inflammation significantly affects an individual's responsiveness to social and affective stimuli.
Affiliation(s)
- Johannes Keck
- Neuromotor Behavior Lab, Institute of Sports Science, Justus-Liebig-University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), Philipps University of Marburg and Justus-Liebig-University Giessen, Germany
- Celine Honekamp
- Sensorimotor Control and Learning, Centre for Cognitive Science, Technical University of Darmstadt, Germany
- Kristina Gebhardt
- Department of Exercise Physiology and Sports Therapy, Institute of Sports Science, Justus-Liebig-University Giessen, Giessen, Germany
- Svenja Nolte
- Department of Exercise Physiology and Sports Therapy, Institute of Sports Science, Justus-Liebig-University Giessen, Giessen, Germany
- Marcel Linka
- Department of Experimental Psychology, Justus-Liebig-University Giessen, Germany
- Benjamin de Haas
- Department of Experimental Psychology, Justus-Liebig-University Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), Philipps University of Marburg and Justus-Liebig-University Giessen, Germany
- Jörn Munzert
- Neuromotor Behavior Lab, Institute of Sports Science, Justus-Liebig-University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), Philipps University of Marburg and Justus-Liebig-University Giessen, Germany
- Karsten Krüger
- Department of Exercise Physiology and Sports Therapy, Institute of Sports Science, Justus-Liebig-University Giessen, Giessen, Germany
- Britta Krüger
- Neuromotor Behavior Lab, Institute of Sports Science, Justus-Liebig-University Giessen, Giessen, Germany
2.
Walper D, Bendixen A, Grimm S, Schubö A, Einhäuser W. Attention deployment in natural scenes: Higher-order scene statistics rather than semantics modulate the N2pc component. J Vis 2024; 24:7. PMID: 38848099; PMCID: PMC11166226; DOI: 10.1167/jov.24.6.7.
Abstract
Which properties of a natural scene affect visual search? We consider the alternative hypotheses that low-level statistics, higher-level statistics, semantics, or layout affect search difficulty in natural scenes. Across three experiments (n = 20 each), we used four different backgrounds that preserve distinct scene properties: (a) natural scenes (all experiments); (b) 1/f noise (pink noise, which preserves only low-level statistics and was used in Experiments 1 and 2); (c) textures that preserve low-level and higher-level statistics but not semantics or layout (Experiments 2 and 3); and (d) inverted (upside-down) scenes that preserve statistics and semantics but not layout (Experiment 2). We included "split scenes" that contained different backgrounds left and right of the midline (Experiment 1, natural/noise; Experiment 3, natural/texture). Participants searched for a Gabor patch that occurred at one of six locations (all experiments). Reaction times were faster for targets on noise and slower on inverted images, compared to natural scenes and textures. The N2pc component of the event-related potential, a marker of attentional selection, had a shorter latency and a higher amplitude for targets in noise than for all other backgrounds. The background contralateral to the target had an effect similar to that on the target side: noise led to faster reactions and shorter N2pc latencies than natural scenes, although we observed no difference in N2pc amplitude. There were no interactions between the target side and the non-target side. Together, this shows that, at least when searching for simple targets without semantic content of their own, natural scenes are more effective distractors than noise, and that this results from higher-order statistics rather than from semantics or layout.
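The 1/f (pink) noise backgrounds described above are straightforward to generate by imposing a 1/f amplitude spectrum on random phases. The following minimal NumPy sketch is our own illustration of the general technique, not the stimulus code used in the study:

```python
import numpy as np

def pink_noise_image(size=256, seed=0):
    """Generate a 1/f ("pink") noise image by imposing a 1/f amplitude
    spectrum on random phases; only low-level statistics are preserved."""
    rng = np.random.default_rng(seed)
    # Random phases, uniformly distributed on the unit circle
    phases = np.exp(2j * np.pi * rng.random((size, size)))
    # Radial spatial-frequency grid; avoid division by zero at DC
    fx = np.fft.fftfreq(size)
    fy = np.fft.fftfreq(size)
    f = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    f[0, 0] = 1.0
    spectrum = phases / f          # amplitude falls off as 1/f
    img = np.real(np.fft.ifft2(spectrum))
    # Normalize to [0, 1] for display
    img = (img - img.min()) / (img.max() - img.min())
    return img

noise = pink_noise_image(128)
print(noise.shape)
```

Inverting the scenes, by contrast, only requires flipping the image array vertically, which is why inversion preserves all image statistics while disrupting layout.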
Affiliation(s)
- Daniel Walper
- Physics of Cognition Group, Chemnitz University of Technology, Chemnitz, Germany
- Alexandra Bendixen
- Cognitive Systems Lab, Chemnitz University of Technology, Chemnitz, Germany
- https://www.tu-chemnitz.de/physik/SFKS/index.html.en
- Sabine Grimm
- Physics of Cognition Group, Chemnitz University of Technology, Chemnitz, Germany
- Cognitive Systems Lab, Chemnitz University of Technology, Chemnitz, Germany
- Anna Schubö
- Cognitive Neuroscience of Perception & Action, Philipps University Marburg, Marburg, Germany
- https://www.uni-marburg.de/en/fb04/team-schuboe
- Wolfgang Einhäuser
- Physics of Cognition Group, Chemnitz University of Technology, Chemnitz, Germany
- https://www.tu-chemnitz.de/physik/PHKP/index.html.en
3.
Broda MD, Borovska P, de Haas B. Individual differences in face salience and rapid face saccades. J Vis 2024; 24:16. PMID: 38913016; PMCID: PMC11204136; DOI: 10.1167/jov.24.6.16.
Abstract
Humans saccade to faces in their periphery faster than to other types of objects. Previous research has highlighted the potential importance of the upper face region in this phenomenon, but it remains unclear whether this is driven by the eye region. Similarly, it remains unclear whether such rapid saccades are exclusive to faces or generalize to other semantically salient stimuli. Furthermore, it is unknown whether individuals differ in their face-specific saccadic reaction times and, if so, whether such differences could be linked to differences in face fixations during free viewing. To explore these open questions, we invited 77 participants to perform a saccadic choice task in which we contrasted faces as well as other salient objects, particularly isolated face features and text, with cars. Additionally, participants freely viewed 700 images of complex natural scenes in a separate session, which allowed us to determine the individual proportion of first fixations falling on faces. For the saccadic choice task, we found advantages for all categories of interest over cars. However, this effect was most pronounced for images of full faces. Full faces also elicited faster saccades compared with eyes, showing that isolated eye regions are not sufficient to elicit face-like responses. Additionally, we found consistent individual differences in saccadic reaction times toward faces that weakly correlated with face salience during free viewing. Our results suggest a link between semantic salience and rapid detection, but underscore the unique status of faces. Further research is needed to resolve the mechanisms underlying rapid face saccades.
Affiliation(s)
- Maximilian Davide Broda
- Experimental Psychology, Justus Liebig University Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
- Petra Borovska
- Experimental Psychology, Justus Liebig University Giessen, Germany
- Benjamin de Haas
- Experimental Psychology, Justus Liebig University Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
4.
Guy N, Sklar AY, Amiaz R, Golan Y, Livny A, Pertzov Y. Individuals vary in their overt attention preference for positive images consistently across time and stimulus types. Sci Rep 2024; 14:8712. PMID: 38622243; PMCID: PMC11018868; DOI: 10.1038/s41598-024-58987-8.
Abstract
What humans look at strongly determines what they see. We show that individual differences in the tendency to look at positive stimuli are stable across time and across contents, establishing gaze positivity preference as a perceptual trait that determines the amount of positively valenced stimuli individuals select for visual processing. Furthermore, we show that patients with major depressive disorder exhibit consistently low positivity preference before treatment. In a subset of patients, we also assessed positivity preference after two months of treatment, by which point it had increased to levels similar to those of healthy individuals. We discuss possible diagnostic applications of these findings, as well as how this general gaze-related trait may influence other behavioral and psychological aspects.
Affiliation(s)
- Nitzan Guy
- Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Psychology, Tel Aviv University, Tel Aviv, Israel
- Asael Y Sklar
- Arison School of Business, Reichman University, Herzliya, Israel
- Revital Amiaz
- Faculty of Medical and Health Sciences, Tel Aviv University, Tel Aviv, Israel
- Department of Psychiatry, Sheba Medical Center, Tel-Hashomer, Israel
- Yael Golan
- The Diagnostic Neuroimaging Laboratory, Sheba Medical Center, Tel-Hashomer, Israel
- Abigail Livny
- Faculty of Medical and Health Sciences, Tel Aviv University, Tel Aviv, Israel
- The Diagnostic Neuroimaging Laboratory, Sheba Medical Center, Tel-Hashomer, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Yoni Pertzov
- Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel
5.
Wang G, Foxwell MJ, Cichy RM, Pitcher D, Kaiser D. Individual differences in internal models explain idiosyncrasies in scene perception. Cognition 2024; 245:105723. PMID: 38262271; DOI: 10.1016/j.cognition.2024.105723.
Abstract
According to predictive processing theories, vision is facilitated by predictions derived from our internal models of what the world should look like. However, the contents of these models and how they vary across people remain unclear. Here, we use drawing as a behavioral readout of the contents of the internal models in individual participants. Participants were first asked to draw typical versions of scene categories, as descriptors of their internal models. These drawings were converted into standardized 3D renders, which we used as stimuli in subsequent scene categorization experiments. Across two experiments, participants' scene categorization was more accurate for renders tailored to their own drawings compared to renders based on others' drawings or copies of scene photographs, suggesting that scene perception is determined by a match with idiosyncratic internal models. Using a deep neural network to computationally evaluate similarities between scene renders, we further demonstrate that graded similarity to the render based on participants' own typical drawings (and thus to their internal model) predicts categorization performance across a range of candidate scenes. Together, our results showcase the potential of a new method for understanding individual differences: starting from participants' personal expectations about the structure of real-world scenes.
Affiliation(s)
- Gongting Wang
- Department of Education and Psychology, Freie Universität Berlin, Germany; Department of Mathematics and Computer Science, Physics, Geography, Justus-Liebig-Universität Gießen, Germany
- Radoslaw M Cichy
- Department of Education and Psychology, Freie Universität Berlin, Germany
- Daniel Kaiser
- Department of Mathematics and Computer Science, Physics, Geography, Justus-Liebig-Universität Gießen, Germany; Center for Mind, Brain and Behavior (CMBB), Philipps-Universität Marburg and Justus-Liebig-Universität Gießen, Germany
6.
Vallée R, Gomez T, Bourreille A, Normand N, Mouchère H, Coutrot A. Influence of training and expertise on deep neural network attention and human attention during a medical image classification task. J Vis 2024; 24:6. PMID: 38587421; PMCID: PMC11008746; DOI: 10.1167/jov.24.4.6.
Abstract
In many different domains, experts can make complex decisions after glancing very briefly at an image. However, the perceptual mechanisms underlying expert performance are still largely unknown. Recently, several machine learning algorithms have been shown to outperform human experts in specific tasks. But these algorithms often behave as black boxes, and their information processing pipeline remains unknown. This lack of transparency and interpretability is highly problematic in applications involving human lives, such as health care. One way to "open the black box" is to compute an artificial attention map from the model, which highlights the pixels of the input image that contributed the most to the model decision. In this work, we directly compare human visual attention to machine visual attention when performing the same visual task. We designed a medical diagnosis task involving the detection of lesions in small bowel endoscopic images. We collected eye movements from novices and gastroenterologist experts while they classified medical images according to their relevance for Crohn's disease diagnosis. We trained three state-of-the-art deep learning models on our carefully labeled dataset. Both humans and machines performed the same task. We extracted artificial attention with six different post hoc methods. We show that the model attention maps are significantly closer to human expert attention maps than to novices', especially for pathological images. As the model gets trained and its performance gets closer to the human experts, the similarity between model and human attention increases. Through the understanding of the similarities between the visual decision-making process of human experts and deep neural networks, we hope to inform both the training of new doctors and the architecture of new algorithms.
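A common way to quantify how close a model's attention map is to a human fixation map is a pixelwise Pearson correlation after standardizing each map, a standard metric in the saliency literature. The sketch below is a generic illustration with toy data; the function name and example values are ours, and the paper's actual comparison metrics may differ:

```python
import numpy as np

def map_similarity(human_map, model_map):
    """Pearson correlation between two attention maps: each map is
    normalized to zero mean and unit variance, then compared pixelwise."""
    h = (human_map - human_map.mean()) / human_map.std()
    m = (model_map - model_map.mean()) / model_map.std()
    return float((h * m).mean())

# Toy maps: the "model" map is a noisy copy of the "human" map
rng = np.random.default_rng(1)
human = rng.random((64, 64))
model = human + 0.1 * rng.standard_normal((64, 64))
unrelated = rng.random((64, 64))

print(map_similarity(human, model))      # high (close to 1)
print(map_similarity(human, unrelated))  # near zero
```

In practice, the human map would be built by smoothing fixation locations with a Gaussian, and the model map would come from a post hoc attribution method applied to the trained network.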
Affiliation(s)
- Rémi Vallée
- Nantes Université, Ecole Centrale Nantes, CNRS, LS2N, UMR 6004, Nantes, France
- Tristan Gomez
- Nantes Université, Ecole Centrale Nantes, CNRS, LS2N, UMR 6004, Nantes, France
- Arnaud Bourreille
- CHU Nantes, Institut des Maladies de l'Appareil Digestif, CIC Inserm 1413, Université de Nantes, Nantes, France
- Nicolas Normand
- Nantes Université, Ecole Centrale Nantes, CNRS, LS2N, UMR 6004, Nantes, France
- Harold Mouchère
- Nantes Université, Ecole Centrale Nantes, CNRS, LS2N, UMR 6004, Nantes, France
- Antoine Coutrot
- Nantes Université, Ecole Centrale Nantes, CNRS, LS2N, UMR 6004, Nantes, France
- Univ Lyon, CNRS, INSA Lyon, UCBL, LIRIS, UMR5205, Lyon, France
7.
Broda MD, de Haas B. Individual differences in human gaze behavior generalize from faces to objects. Proc Natl Acad Sci U S A 2024; 121:e2322149121. PMID: 38470925; PMCID: PMC10963009; DOI: 10.1073/pnas.2322149121.
Abstract
Individuals differ in where they fixate on a face, with some looking closer to the eyes while others prefer the mouth region. These individual biases are highly robust, generalize from the lab to the outside world, and have been associated with social cognition and associated disorders. However, it is unclear whether these biases are specific to faces or influenced by domain-general mechanisms of vision. Here, we tested between these hypotheses by asking whether individual face fixation biases generalize to inanimate objects. We analyzed >1.8 million fixations toward faces and objects in complex natural scenes from 405 participants tested in multiple labs. Consistent interindividual differences in fixation positions were highly inter-correlated across faces and objects in all samples. Observers who fixated closer to the eye region also fixated higher on inanimate objects, and vice versa. Furthermore, the inter-individual spread of fixation positions scaled with target size in precisely the same, non-linear manner for faces and objects. These findings contradict a purely domain-specific account of individual face gaze. Instead, they suggest significant domain-general contributions to the individual way we look at faces, a finding with potential relevance for basic vision, face perception, social cognition, and associated clinical conditions.
Affiliation(s)
- Maximilian Davide Broda
- Experimental Psychology, Justus Liebig University Giessen, 35394 Giessen, Germany
- Center for Mind, Brain and Behavior, Universities of Marburg, Giessen, and Darmstadt, 35032 Marburg, Germany
- Benjamin de Haas
- Experimental Psychology, Justus Liebig University Giessen, 35394 Giessen, Germany
- Center for Mind, Brain and Behavior, Universities of Marburg, Giessen, and Darmstadt, 35032 Marburg, Germany
8.
Haskins AJ, Mentch J, Van Wicklin C, Choi YB, Robertson CE. Brief Report: Differences in Naturalistic Attention to Real-World Scenes in Adolescents with 16p11.2 Deletion. J Autism Dev Disord 2024; 54:1078-1087. PMID: 36512194; DOI: 10.1007/s10803-022-05850-2.
Abstract
Sensory differences are nearly universal in autism, but their genetic origins are poorly understood. Here, we tested how individuals with an autism-linked genotype, 16p11.2 deletion ("16p"), attend to visual information in immersive, real-world photospheres. We monitored participants' (N = 44) gaze while they actively explored 360° scenes via head-mounted virtual reality. We modeled the visually salient and semantically meaningful information in scenes and quantified the relative bottom-up vs. top-down influences on attentional deployment. We found that, compared to typically developing (TD) control participants, 16p participants' attention was less dominantly predicted by semantically meaningful scene regions relative to visually salient regions. These results suggest that a reduction in top-down relative to bottom-up attention characterizes how individuals with 16p11.2 deletions engage with naturalistic visual environments.
Affiliation(s)
- Amanda J Haskins
- Department of Psychological & Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH 03755, USA
- Jeff Mentch
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Boston, MA 02115, USA
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, USA
- Yeo Bi Choi
- Department of Psychological & Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH 03755, USA
- Caroline E Robertson
- Department of Psychological & Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH 03755, USA
9.
Ghazaryan G, van Vliet M, Lammi L, Lindh-Knuutila T, Kivisaari S, Hultén A, Salmelin R. Cortical time-course of evidence accumulation during semantic processing. Commun Biol 2023; 6:1242. PMID: 38066098; PMCID: PMC10709650; DOI: 10.1038/s42003-023-05611-6.
Abstract
Our understanding of the surrounding world and communication with other people are tied to mental representations of concepts. In order for the brain to recognize an object, it must determine which concept to access based on information available from sensory inputs. In this study, we combine magnetoencephalography and machine learning to investigate how concepts are represented and accessed in the brain over time. Using brain responses from a silent picture naming task, we track the dynamics of visual and semantic information processing, and show that the brain gradually accumulates information on different levels before eventually reaching a plateau. The timing of this plateau point varies across individuals and feature models, indicating notable temporal variation in visual object recognition and semantic processing.
Affiliation(s)
- Gayane Ghazaryan
- Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, FI-00076 Aalto, Finland
- Marijn van Vliet
- Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, FI-00076 Aalto, Finland
- Lotta Lammi
- Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, FI-00076 Aalto, Finland
- Tiina Lindh-Knuutila
- Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, FI-00076 Aalto, Finland
- Sasa Kivisaari
- Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, FI-00076 Aalto, Finland
- Annika Hultén
- Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, FI-00076 Aalto, Finland
- Aalto NeuroImaging, Aalto University, P.O. Box 12200, FI-00076 Aalto, Finland
- Riitta Salmelin
- Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, FI-00076 Aalto, Finland
- Aalto NeuroImaging, Aalto University, P.O. Box 12200, FI-00076 Aalto, Finland
10.
Roth N, Rolfs M, Hellwich O, Obermayer K. Objects guide human gaze behavior in dynamic real-world scenes. PLoS Comput Biol 2023; 19:e1011512. PMID: 37883331; PMCID: PMC10602265; DOI: 10.1371/journal.pcbi.1011512.
Abstract
The complexity of natural scenes makes it challenging to experimentally study the mechanisms behind human gaze behavior when viewing dynamic environments. Historically, eye movements were believed to be driven primarily by space-based attention towards locations with salient features. Increasing evidence suggests, however, that visual attention does not select locations with high saliency but operates on attentional units given by the objects in the scene. We present a new computational framework to investigate the importance of objects for attentional guidance. This framework is designed to simulate realistic scanpaths for dynamic real-world scenes, including saccade timing and smooth pursuit behavior. Individual model components are based on psychophysically uncovered mechanisms of visual attention and saccadic decision-making. All mechanisms are implemented in a modular fashion with a small number of well-interpretable parameters. To systematically analyze the importance of objects in guiding gaze behavior, we implemented five different models within this framework: two purely spatial models (one based on low-level and one on high-level saliency); two object-based models (one incorporating low-level saliency for each object, the other using no saliency information); and a mixed model with object-based attention and selection but space-based inhibition of return. We optimized each model's parameters to reproduce the saccade amplitude and fixation duration distributions of human scanpaths using evolutionary algorithms. We compared model performance with respect to spatial and temporal fixation behavior, including the proportion of fixations exploring the background, as well as detecting, inspecting, and returning to objects. A model with object-based attention and inhibition, which uses saliency information to prioritize between objects for saccadic selection, produced scanpath statistics with the highest similarity to the human data. This demonstrates that scanpath models benefit from object-based attention and selection, suggesting that object-level attentional units play an important role in guiding attentional processing.
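The parameter-fitting step, using an evolutionary algorithm so that simulated distributions match human ones, can be sketched in miniature. The toy example below is our own construction under stated assumptions, not the authors' framework: it evolves a single rate parameter so that simulated "fixation durations" match an observed exponential sample, scored with the two-sample Kolmogorov-Smirnov statistic:

```python
import numpy as np

def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the empirical CDFs of samples a and b."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(0)
# Stand-in for human fixation durations (seconds), exponentially distributed
observed = rng.exponential(scale=0.25, size=2000)

def fitness(rate):
    """Mismatch between simulated and observed duration distributions."""
    simulated = rng.exponential(scale=rate, size=2000)
    return ks_distance(observed, simulated)

# Tiny (mu + lambda) evolution strategy over a single model parameter
population = list(rng.uniform(0.01, 2.0, size=10))
for _ in range(30):
    offspring = [max(1e-3, p + 0.05 * rng.standard_normal()) for p in population]
    population = sorted(population + offspring, key=fitness)[:10]

best = population[0]
print(best)  # evolved rate; expected to approach the generating scale of 0.25
```

The real framework optimizes several interpretable parameters at once and matches both saccade amplitude and fixation duration distributions, but the selection-plus-mutation loop is the same idea.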
Affiliation(s)
- Nicolas Roth
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin, Germany
- Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Germany
- Martin Rolfs
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin, Germany
- Department of Psychology, Humboldt-Universität zu Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Germany
- Olaf Hellwich
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin, Germany
- Institute of Computer Engineering and Microelectronics, Technische Universität Berlin, Germany
- Klaus Obermayer
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin, Germany
- Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Germany
11.
Pedziwiatr MA, Heer S, Coutrot A, Bex PJ, Mareschal I. Influence of prior knowledge on eye movements to scenes as revealed by hidden Markov models. J Vis 2023; 23:10. PMID: 37721772; PMCID: PMC10511023; DOI: 10.1167/jov.23.10.10.
Abstract
Human visual experience usually provides ample opportunity to accumulate knowledge about events unfolding in the environment. In typical scene perception experiments, however, participants view images that are unrelated to each other and, therefore, they cannot accumulate knowledge relevant to the upcoming visual input. Consequently, the influence of such knowledge on how this input is processed remains underexplored. Here, we investigated this influence in the context of gaze control. We used sequences of static film frames arranged in a way that allowed us to compare eye movements to identical frames between two groups: a group that accumulated prior knowledge relevant to the situations depicted in these frames and a group that did not. We used a machine learning approach based on hidden Markov models fitted to individual scanpaths to demonstrate that the gaze patterns from the two groups differed systematically and, thereby, showed that recently accumulated prior knowledge contributes to gaze control. Next, we leveraged the interpretability of hidden Markov models to characterize these differences. Additionally, we report two unexpected and interesting caveats of our approach. Overall, our results highlight the importance of recently acquired prior knowledge for oculomotor control and the potential of hidden Markov models as a tool for investigating it.
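At the core of the hidden-Markov-model approach is scoring a scanpath's likelihood under a fitted model, where hidden states correspond to gaze regions of interest and emissions are fixation coordinates. The following minimal sketch uses hand-set, hypothetical parameters purely for illustration (the study fit models to individual scanpaths rather than assuming fixed ones):

```python
import numpy as np

def log_gauss(x, mean, var):
    """Log density of independent (diagonal-covariance) Gaussians,
    summed over coordinate dimensions, evaluated per hidden state."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var, axis=-1)

def hmm_loglik(fixations, init, trans, means, variances):
    """Log-likelihood of a fixation sequence under a Gaussian HMM,
    computed with the forward algorithm in log space."""
    log_alpha = np.log(init) + log_gauss(fixations[0], means, variances)
    for x in fixations[1:]:
        m = log_alpha.max()  # log-sum-exp shift for numerical stability
        log_alpha = (np.log(np.exp(log_alpha - m) @ trans) + m
                     + log_gauss(x, means, variances))
    m = log_alpha.max()
    return float(m + np.log(np.exp(log_alpha - m).sum()))

# Two hypothetical 2-state "group" models with different regions of interest
means_a = np.array([[0.3, 0.3], [0.7, 0.7]])   # group A's gaze regions
means_b = np.array([[0.3, 0.7], [0.7, 0.3]])   # group B's gaze regions
variances = np.full((2, 2), 0.01)
init = np.array([0.5, 0.5])
trans = np.array([[0.8, 0.2], [0.2, 0.8]])

# A scanpath hovering around group A's regions (normalized coordinates)
scanpath = np.array([[0.31, 0.29], [0.32, 0.33], [0.69, 0.71], [0.68, 0.72]])
ll_a = hmm_loglik(scanpath, init, trans, means_a, variances)
ll_b = hmm_loglik(scanpath, init, trans, means_b, variances)
print(ll_a > ll_b)  # the scanpath is better explained by group A's model
```

Comparing such log-likelihoods across group-level models is one way to test whether two groups' gaze patterns differ systematically; the interpretability comes from inspecting the fitted state means and transition probabilities.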
Affiliation(s)
- Marek A Pedziwiatr
- School of Biological and Behavioural Sciences, Queen Mary University of London, London, UK
- Sophie Heer
- School of Biological and Behavioural Sciences, Queen Mary University of London, London, UK
- Antoine Coutrot
- Univ Lyon, CNRS, INSA Lyon, UCBL, LIRIS, UMR5205, F-69621 Lyon, France
- Peter J Bex
- Department of Psychology, Northeastern University, Boston, MA, USA
- Isabelle Mareschal
- School of Biological and Behavioural Sciences, Queen Mary University of London, London, UK
12.
Borovska P, de Haas B. Faces in scenes attract rapid saccades. J Vis 2023; 23:11. PMID: 37552021; PMCID: PMC10411644; DOI: 10.1167/jov.23.8.11.
Abstract
During natural vision, the human visual system has to process the targets of upcoming eye movements in parallel with currently fixated stimuli. Saccades targeting isolated faces are known to have lower latency and higher velocity, but it is unclear how this generalizes to the natural cycle of saccades and fixations during free viewing of complex scenes. To what degree can the visual system process high-level features of extrafoveal stimuli when they are embedded in visual clutter and compete with concurrent foveal input? Here, we investigated how free-viewing dynamics vary as a function of the upcoming fixation target while controlling for various low-level factors. We found strong evidence that face-directed saccades, compared with those directed at inanimate objects, are preceded by shorter fixations and have higher peak velocity. Interestingly, the boundary conditions for these two effects are dissociated. The effect on fixation duration was limited to face saccades that were small, followed the trajectory of the preceding saccade, and occurred early in a trial. This is reminiscent of a recently proposed model of perisaccadic retinotopic shifts of attention. The effect on saccadic velocity, however, extended to very large saccades and increased with trial duration. These findings suggest that multiple, independent mechanisms interact to process high-level features of extrafoveal targets and modulate the dynamics of natural vision.
Affiliation(s)
- Petra Borovska
- Experimental Psychology, Justus Liebig University, Giessen, Germany
- Benjamin de Haas
- Experimental Psychology, Justus Liebig University, Giessen, Germany

13
Foroughi CK, Devlin S, Pak R, Brown NL, Sibley C, Coyne JT. Near-Perfect Automation: Investigating Performance, Trust, and Visual Attention Allocation. Hum Factors 2023; 65:546-561. PMID: 34348511; DOI: 10.1177/00187208211032889.
Abstract
OBJECTIVE: Assess performance, trust, and visual attention during the monitoring of a near-perfect automated system.
BACKGROUND: Research rarely attempts to assess performance, trust, and visual attention in near-perfect automated systems even though they will be relied on in high-stakes environments.
METHODS: Seventy-three participants completed a 40-min supervisory control task where they monitored three search feeds. All search feeds were 100% reliable with the exception of two automation failures: one miss and one false alarm. Eye-tracking and subjective trust data were collected.
RESULTS: Thirty-four percent of participants correctly identified the automation miss, and 67% correctly identified the automation false alarm. Subjective trust increased when participants did not detect the automation failures and decreased when they did. Participants who detected the false alarm had a more complex scan pattern in the 2 min centered around the automation failure compared with those who did not. Additionally, those who detected the failures had longer dwell times in and transitioned to the center sensor feed significantly more often.
CONCLUSION: Not only does this work highlight the limitations of the human when monitoring near-perfect automated systems, it begins to quantify the subjective experience and attentional cost of the human. It further emphasizes the need to (1) reevaluate the role of the operator in future high-stakes environments and (2) understand the human on an individual level and actively design for the given individual when working with near-perfect automated systems.
APPLICATION: Multiple operator-level measures should be collected in real-time in order to monitor an operator's state and leverage real-time, individualized assistance.
Affiliation(s)
- Shannon Devlin
- U.S. Naval Research Laboratory, Washington, DC, USA
- University of Virginia, Charlottesville, USA
- Ciara Sibley
- U.S. Naval Research Laboratory, Washington, DC, USA

14
Baker KA, Mondloch CJ. Unfamiliar face matching ability predicts the slope of face learning. Sci Rep 2023; 13:5248. PMID: 37002382; PMCID: PMC10066355; DOI: 10.1038/s41598-023-32244-w.
Abstract
We provide the first examination of individual differences in the efficiency of face learning. Investigating individual differences in face learning can illuminate potential mechanisms and provide greater understanding of why certain individuals might be more efficient face learners. Participants completed two unfamiliar face matching tasks and a learning task in which learning was assessed after viewing 1, 3, 6, and 9 images of to-be-learned identities. Individual differences in the slope of face learning (i.e., increases in sensitivity to identity) were predicted by the ability to discriminate between matched (same-identity) vs. mismatched (different-identity) pairs of wholly unfamiliar faces. A Dual Process Signal Detection model showed that three parameters increased with learning: Familiarity (an unconscious type of memory that varies in strength), recollection-old (conscious recognition of a learned identity), and recollection-new (conscious/confident rejection of novel identities). Good (vs. poor) matchers had higher Recollection-Old scores throughout learning and showed a steeper increase in Recollection-New. We conclude that good matchers are better able to capitalize on exposure to within-person variability in appearance, an effect that is attributable to their conscious memory for both learned and novel faces. These results have applied implications and will inform contemporary and traditional models of face identification.
Affiliation(s)
- Kristen A Baker
- Department of Psychology, Brock University, 1812 Sir Isaac Brock Way, St. Catharines, ON, L2S 3A1, Canada.
- Catherine J Mondloch
- Department of Psychology, Brock University, 1812 Sir Isaac Brock Way, St. Catharines, ON, L2S 3A1, Canada

15
Guy N, Kardosh R, Sklar AY, Lancry-Dayan OC, Pertzov Y. Do we know our visual preferences? J Vis 2023; 23:9. PMID: 36799868; PMCID: PMC9942782; DOI: 10.1167/jov.23.2.9.
Abstract
Humans differ in the amount of time they direct their gaze toward different types of stimuli. Individuals' preferences are known to be reliable and can predict various cognitive and affective processes. However, it remains unclear whether humans are aware of their visual gaze preferences and are able to report them. In this study, across three different tasks and without prior warning, participants were asked to estimate the amount of time they had looked at a certain visual content (e.g., faces or texts) at the end of each experiment. The findings show that people can accurately report their visual gaze preferences. The implications are discussed in the context of visual perception, metacognition, and the development of applied diagnostic tools based on eye tracking.
Affiliation(s)
- Nitzan Guy
- Cognitive and Brain Sciences Department, Hebrew University of Jerusalem, Mount Scopus, Jerusalem, Israel
- Psychology Department, Hebrew University of Jerusalem, Jerusalem, Israel
- Rasha Kardosh
- Psychology Department, New York University, New York, NY, USA
- Asael Y. Sklar
- Edmond & Lily Safra Center for Brain Sciences, Hebrew University of Jerusalem, Jerusalem, Israel
- Arison School of Business, Reichman University, Herzliya, Israel
- Yoni Pertzov
- Psychology Department, Hebrew University of Jerusalem, Jerusalem, Israel

16
Broda MD, Haddad T, de Haas B. Quick, eyes! Isolated upper face regions but not artificial features elicit rapid saccades. J Vis 2023; 23:5. PMID: 36749582; PMCID: PMC9919614; DOI: 10.1167/jov.23.2.5.
Abstract
Human faces elicit faster saccades than objects or animals, resonating with the great importance of faces for our species. The underlying mechanisms are largely unclear. Here, we test two hypotheses based on previous findings. First, ultra-rapid saccades toward faces may not depend on the presence of the whole face, but on the upper face region containing the eyes. Second, ultra-rapid saccades toward faces (and possibly face parts) may emerge from our extensive experience with this stimulus and thus extend to glasses and masks - artificial features frequently encountered as part of a face. To test these hypotheses, we asked 43 participants to complete a saccadic choice task, which contrasted images of whole, upper and lower faces, face masks, and glasses with car images. The resulting data confirmed ultra-rapid saccades for isolated upper face regions, but not for artificial facial features.
Affiliation(s)
- Maximilian Davide Broda
- Experimental Psychology, Justus Liebig University Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
- Theresa Haddad
- Experimental Psychology, Justus Liebig University Giessen, Germany
- Benjamin de Haas
- Experimental Psychology, Justus Liebig University Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany

17
Preference for horizontal information in faces predicts typical variations in face recognition but is not impaired in developmental prosopagnosia. Psychon Bull Rev 2023; 30:261-268. PMID: 36002717; PMCID: PMC9971097; DOI: 10.3758/s13423-022-02163-4.
Abstract
Face recognition is strongly influenced by the processing of orientation structure in the face image. Faces are much easier to recognize when they are filtered to include only horizontally oriented information compared with vertically oriented information. Here, we investigate whether preferences for horizontal information in faces are related to face recognition abilities in a typical sample (Experiment 1), and whether such preferences are lacking in people with developmental prosopagnosia (DP; Experiment 2). Experiment 1 shows that preferences for horizontal face information are linked to face recognition abilities in a typical sample, with weak evidence of face-selective contributions. Experiment 2 shows that preferences for horizontal face information are comparable in control and DP groups. Our study suggests that preferences for horizontal face information are related to variations in face recognition abilities in the typical range, and that these preferences are not aberrant in DP.

18
Berlijn AM, Hildebrandt LK, Gamer M. Idiosyncratic viewing patterns of social scenes reflect individual preferences. J Vis 2022; 22:10. PMID: 36583910; PMCID: PMC9807181; DOI: 10.1167/jov.22.13.10.
Abstract
In general, humans preferentially look at conspecifics in naturalistic images. However, such group-based effects might conceal systematic individual differences concerning the preference for social information. Here, we investigated to what degree fixations on social features occur consistently within observers and whether this preference generalizes to other measures of social prioritization in the laboratory as well as the real world. Participants carried out a free viewing task, a relevance taps task that required them to actively select image regions that are crucial for understanding a given scene, and they were asked to freely take photographs outside the laboratory that were later classified regarding their social content. We observed stable individual differences in the fixation and active selection of human heads and faces that were correlated across tasks and partly predicted the social content of self-taken photographs. Such a relationship was not observed for human bodies, indicating that different social elements need to be dissociated. These findings suggest that idiosyncrasies in the visual exploration and interpretation of social features exist and predict real-world behavior. Future studies should further characterize these preferences and elucidate how they shape perception and interpretation of social contexts in healthy participants and patients with mental disorders that affect social functioning.
Affiliation(s)
- Adam M. Berlijn
- Department of Experimental Psychology, Heinrich-Heine-University Düsseldorf, Düsseldorf, Germany
- Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, University Hospital Düsseldorf, Heinrich-Heine University Düsseldorf, Düsseldorf, Germany
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Department of Psychology, Julius-Maximilians-University Würzburg, Würzburg, Germany
- Lea K. Hildebrandt
- Department of Psychology, Julius-Maximilians-University Würzburg, Würzburg, Germany
- Matthias Gamer
- Department of Psychology, Julius-Maximilians-University Würzburg, Würzburg, Germany

19
Hayes TR, Henderson JM. Scene inversion reveals distinct patterns of attention to semantically interpreted and uninterpreted features. Cognition 2022; 229:105231. DOI: 10.1016/j.cognition.2022.105231.

20
Haskins AJ, Mentch J, Botch TL, Garcia BD, Burrows AL, Robertson CE. Reduced social attention in autism is magnified by perceptual load in naturalistic environments. Autism Res 2022; 15:2310-2323. PMID: 36207799; PMCID: PMC10092155; DOI: 10.1002/aur.2829.
Abstract
Individuals with autism spectrum conditions (ASC) describe differences in both social cognition and sensory processing, but little is known about the causal relationship between these disparate functional domains. In the present study, we sought to understand how a core characteristic of autism-reduced social attention-is impacted by the complex multisensory signals present in real-world environments. We tested the hypothesis that reductions in social attention associated with autism would be magnified by increasing perceptual load (e.g., motion, multisensory cues). Adult participants (N = 40; 19 ASC) explored a diverse set of 360° real-world scenes in a naturalistic, active viewing paradigm (immersive virtual reality + eyetracking). Across three conditions, we systematically varied perceptual load while holding the social and semantic information present in each scene constant. We demonstrate that reduced social attention is not a static signature of the autistic phenotype. Rather, group differences in social attention emerged with increasing perceptual load in naturalistic environments, and the susceptibility of social attention to perceptual load predicted continuous measures of autistic traits across groups. Crucially, this pattern was specific to the social domain: we did not observe differential impacts of perceptual load on attention directed toward nonsocial semantic (i.e., object, place) information or low-level fixation behavior (i.e., overall fixation frequency or duration). This study provides a direct link between social and sensory processing in autism. Moreover, reduced social attention may be an inaccurate characterization of autism. Instead, our results suggest that social attention in autism is better explained by "social vulnerability," particularly to the perceptual load of real-world environments.
Affiliation(s)
- Amanda J Haskins
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire, USA
- Jeff Mentch
- Speech and Hearing Bioscience and Technology, Harvard University, Boston, Massachusetts, USA
- McGovern Institute for Brain Research, MIT, Cambridge, Massachusetts, USA
- Thomas L Botch
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire, USA
- Brenda D Garcia
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire, USA
- Alexandra L Burrows
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire, USA
- Caroline E Robertson
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire, USA

21
Broda MD, de Haas B. Individual differences in looking at persons in scenes. J Vis 2022; 22:9. DOI: 10.1167/jov.22.12.9.
Affiliation(s)
- Maximilian Davide Broda
- Experimental Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
- Benjamin de Haas
- Experimental Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany

22
Broda MD, de Haas B. Individual fixation tendencies in person viewing generalize from images to videos. Iperception 2022; 13:20416695221128844. PMID: 36353505; PMCID: PMC9638695; DOI: 10.1177/20416695221128844.
Abstract
Fixation behavior toward persons in static scenes varies considerably between individuals. However, it is unclear whether these differences generalize to dynamic stimuli. Here, we examined individual differences in the distribution of gaze across seven person features (i.e. body and face parts) in static and dynamic scenes. Forty-four participants freely viewed 700 complex static scenes followed by eight director-cut videos (28,925 frames). We determined the presence of person features using hand-delineated pixel masks (images) and Deep Neural Networks (videos). Results replicated highly consistent individual differences in fixation tendencies for all person features in static scenes and revealed that these tendencies generalize to videos. Individual fixation behavior for both images and videos fell into two anticorrelated clusters representing the tendency to fixate faces versus bodies. These results corroborate a low-dimensional space for individual gaze biases toward persons and show they generalize from images to videos.
Affiliation(s)
- Maximilian D. Broda
- Department of Experimental Psychology, Justus Liebig University Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
- Benjamin de Haas
- Department of Experimental Psychology, Justus Liebig University Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany

23
Großekathöfer JD, Seis C, Gamer M. Reality in a sphere: A direct comparison of social attention in the laboratory and the real world. Behav Res Methods 2022; 54:2286-2301. PMID: 34918223; PMCID: PMC9579106; DOI: 10.3758/s13428-021-01724-0.
Abstract
Humans often show reduced social attention in real situations, a finding rarely replicated in controlled laboratory studies. Virtual reality is supposed to allow for ecologically valid and at the same time highly controlled experiments. This study aimed to provide initial insights into the reliability and validity of using spherical videos viewed via a head-mounted display (HMD) to assess social attention. We chose five public places in the city of Würzburg and measured eye movements of 44 participants for 30 s at each location twice: once in a real environment with mobile eye-tracking glasses and once in a virtual environment playing a spherical video of the location in an HMD with an integrated eye tracker. As hypothesized, participants demonstrated reduced social attention with less exploration of passengers in the real environment as compared to the virtual one. This is in line with earlier studies showing social avoidance in interactive situations. Furthermore, we only observed consistent gaze proportions on passengers across locations in virtual environments. These findings highlight that the potential for social interactions and an adherence to social norms are essential modulators of viewing behavior in social situations and cannot be easily simulated in laboratory contexts. However, spherical videos might be helpful for supplementing the range of methods in social cognition research and other fields. Data and analysis scripts are available at https://osf.io/hktdu/.
Affiliation(s)
- Jonas D Großekathöfer
- Department of Psychology, Julius Maximilian University of Würzburg, Würzburg, Germany.
- Christian Seis
- Department of Psychology, Julius Maximilian University of Würzburg, Würzburg, Germany
- Matthias Gamer
- Department of Psychology, Julius Maximilian University of Würzburg, Würzburg, Germany

24
Linka M, Broda MD, Alsheimer T, de Haas B, Ramon M. Characteristic fixation biases in Super-Recognizers. J Vis 2022; 22:17. PMID: 35900724; PMCID: PMC9344214; DOI: 10.1167/jov.22.8.17.
Abstract
Neurotypical observers show large and reliable individual differences in gaze behavior along several semantic object dimensions. Individual gaze behavior toward faces has been linked to face identity processing, including that of neurotypical observers. Here, we investigated potential gaze biases in Super-Recognizers (SRs), individuals with exceptional face identity processing skills. Ten SRs, identified with a novel conservative diagnostic framework, and 43 controls freely viewed 700 complex scenes depicting more than 5000 objects. First, we tested whether SRs and controls differ in fixation biases along four semantic dimensions: faces, text, objects being touched, and bodies. Second, we tested potential group differences in fixation biases toward eyes and mouths. Finally, we tested whether SRs fixate closer to the theoretical optimal fixation point for face identification. SRs showed a stronger gaze bias toward faces and away from text and touched objects, starting from the first fixation onward. Further, SRs spent a significantly smaller proportion of first fixations and dwell time toward faces on mouths but did not differ in dwell time or first fixations devoted to eyes. Face fixation of SRs also fell significantly closer to the theoretical optimal fixation point for identification, just below the eyes. Our findings suggest that reliable superiority for face identity processing is accompanied by early fixation biases toward faces and preferred saccadic landing positions close to the theoretical optimum for face identification. We discuss future directions to investigate the functional basis of individual fixation behavior and face identity processing ability.
Affiliation(s)
- Marcel Linka
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Tamara Alsheimer
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Applied Face Cognition Lab, University of Lausanne, Institute of Psychology, Lausanne, Switzerland
- Benjamin de Haas
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Meike Ramon
- Applied Face Cognition Lab, University of Lausanne, Institute of Psychology, Lausanne, Switzerland

25
Merscher AS, Tovote P, Pauli P, Gamer M. Centralized gaze as an adaptive component of defensive states in humans. Proc Biol Sci 2022; 289:20220405. PMID: 35582796; PMCID: PMC9114933; DOI: 10.1098/rspb.2022.0405.
Abstract
Adequate defensive responding is crucial for mental health but scientifically not well understood. Specifically, it seems difficult to dissociate defense and approach states based on autonomic response patterns. We thus explored the robustness and threat-specificity of recently described oculomotor dynamics upon threat in anticipation of either threatening or rewarding stimuli in humans. While visually exploring naturalistic images, participants (50 per experiment) expected an inevitable, no, or avoidable shock (Experiment 1) or a guaranteed, no, or achievable reward (Experiment 2) that could be averted or gained by a quick behavioural response. We observed reduced heart rate (bradycardia), increased skin conductance, pupil dilation and globally centralized gaze when shocks were inevitable but, more pronouncedly, when they were avoidable. Reward trials were not associated with globally narrowed visual exploration, but autonomic responses resembled characteristics of the threat condition. While bradycardia and concomitant sympathetic activation reflect not only threat-related but also action-preparatory states independent of valence, global centralization of gaze seems a robust phenomenon during the anticipation of avoidable threat. Thus, instead of relying on single readouts, translational research in animals and humans should consider the multi-dimensionality of states in aversive and rewarding contexts, especially when investigating ambivalent, conflicting situations.
Affiliation(s)
- Alma-Sophia Merscher
- Department of Psychology, University of Würzburg, Marcusstr. 9-11, 97070 Würzburg, Germany
- Philip Tovote
- Systems Neurobiology, Institute of Clinical Neurobiology, University Hospital Würzburg, Versbacher Str. 5, 97078 Würzburg, Germany
- Paul Pauli
- Department of Psychology, University of Würzburg, Marcusstr. 9-11, 97070 Würzburg, Germany
- Matthias Gamer
- Department of Psychology, University of Würzburg, Marcusstr. 9-11, 97070 Würzburg, Germany

26
Hayes TR, Henderson JM. Meaning maps detect the removal of local semantic scene content but deep saliency models do not. Atten Percept Psychophys 2022; 84:647-654. PMID: 35138579; PMCID: PMC11128357; DOI: 10.3758/s13414-021-02395-x.
Abstract
Meaning mapping uses human raters to estimate different semantic features in scenes, and has been a useful tool in demonstrating the important role semantics play in guiding attention. However, recent work has argued that meaning maps do not capture semantic content, but, like deep learning models of scene attention, represent only semantically-neutral image features. In the present study, we directly tested this hypothesis using a diffeomorphic image transformation that is designed to remove the meaning of an image region while preserving its image features. Specifically, we tested whether meaning maps and three state-of-the-art deep learning models were sensitive to the loss of semantic content in this critical diffeomorphed scene region. The results were clear: meaning maps generated by human raters showed a large decrease in the diffeomorphed scene regions, while all three deep saliency models showed a moderate increase in the diffeomorphed scene regions. These results demonstrate that meaning maps reflect local semantic content in scenes while deep saliency models do something else. We conclude that the meaning mapping approach is an effective tool for estimating semantic content in scenes.
Affiliation(s)
- Taylor R Hayes
- Center for Mind and Brain, University of California, Davis, CA, USA.
- John M Henderson
- Center for Mind and Brain, University of California, Davis, CA, USA
- Department of Psychology, University of California, Davis, CA, USA

27
Ganczarek J, Pietras K, Stolińska A, Szubielska M. Titles and Semantic Violations Affect Eye Movements When Viewing Contemporary Paintings. Front Hum Neurosci 2022; 16:808330. PMID: 35308608; PMCID: PMC8930854; DOI: 10.3389/fnhum.2022.808330.
Abstract
The role of titles in perception of visual art is a topic of interesting discussions that brings together artists, curators, and researchers. Titles provide contextual cues and guide perception. They can be particularly useful when paintings include semantic violations that make them challenging for viewers, especially viewers lacking expert knowledge. The aim of this study is to investigate the effects of titles and semantic violations on eye movements. A total of 127 participants without expertise in visual art viewed 40 paintings with and without semantic violations (20 each) in one of three conditions: untitled, consistent titles, and inconsistent titles. After viewing each painting, participants also rated liking and understanding. Our results suggest that titles affect the way paintings are viewed: both titled conditions were associated with shorter first fixation durations, longer saccade durations and amplitudes, and higher dynamic entropy than the untitled condition. Titles were fixated on more frequently (but only in the time window between 1,200 and 2,800 ms) when presented alongside paintings with semantic violations than paintings without violations, and the percentage of fixations to titles was particularly high in the case of paintings with double inconsistencies (inconsistent titles and semantic violations). Also, we found that semantic violations attracted attention early on (300–900 ms), whereas titles received attention later (average first fixation on title was at 936.28 ms) and inconsistencies in titles were processed even later (after 4,000 ms). Finally, semantic violations were associated with higher dynamic entropy than paintings without violations. Our results demonstrate the importance of titles for the processing of artworks, especially artworks that present a challenge for the viewers.
Affiliation(s)
- Joanna Ganczarek
- Institute of Psychology, Pedagogical University of Cracow, Kraków, Poland
- Karolina Pietras
- Institute of Psychology, Pedagogical University of Cracow, Kraków, Poland
- Anna Stolińska
- Institute of Computer Science, Pedagogical University of Cracow, Kraków, Poland
- Magdalena Szubielska
- Institute of Psychology, The John Paul II Catholic University of Lublin, Lublin, Poland

28
Stokes JD, Rizzo A, Geng JJ, Schweitzer JB. Measuring Attentional Distraction in Children With ADHD Using Virtual Reality Technology With Eye-Tracking. Front Virtual Real 2022; 3:855895. PMID: 35601272; PMCID: PMC9119405; DOI: 10.3389/frvir.2022.855895.
Abstract
Objective Distractions inordinately impair attention in children with Attention-Deficit Hyperactivity Disorder (ADHD), but examining this behavior under real-life conditions poses a challenge for researchers and clinicians. Virtual reality (VR) technologies may mitigate the limitations of traditional laboratory methods by providing a more ecologically relevant experience. The use of eye-tracking measures to assess attentional functioning in a VR context in ADHD is novel. In this proof-of-principle project, we evaluate the temporal dynamics of distraction via eye-tracking measures in a VR classroom setting with 20 children diagnosed with ADHD between 8 and 12 years of age. Method We recorded continuous eye movements while participants performed math, Stroop, and continuous performance test (CPT) tasks with a series of "real-world" classroom distractors presented. We analyzed the impact of the distractors on rates of task performance and on on-task eye gaze (i.e., looking at a classroom whiteboard) versus off-task eye gaze (i.e., looking away from the whiteboard). Results We found that while children did not always look at distractors themselves for long periods of time, the presence of a distractor disrupted on-task gaze at task-relevant whiteboard stimuli and lowered rates of task performance. This suggests that children with attention deficits may have a hard time returning to tasks once those tasks are interrupted, even if the distractor itself does not hold attention. Eye-tracking measures within the VR context can reveal rich information about attentional disruption. Conclusions Leveraging virtual reality technology in combination with eye-tracking measures is well suited to advance the understanding of mechanisms underlying attentional impairment in naturalistic settings. Assessment within these immersive and well-controlled simulated environments provides new options for increasing our understanding of distractibility and its potential impact on the development of interventions for children with ADHD.
Affiliation(s)
- Jared D. Stokes
- MIND Institute, University of California, Davis, Sacramento, CA, United States
- Department of Psychiatry and Behavioral Sciences, University of California, Davis, Sacramento, CA, United States
- Center for Mind and Brain, University of California, Davis, Davis, CA, United States
- Albert Rizzo
- Institute for Creative Technologies, University of Southern California, Los Angeles, CA, United States
- Joy J. Geng
- Center for Mind and Brain, University of California, Davis, Davis, CA, United States
- Department of Psychology, University of California, Davis, Davis, CA, United States
- Julie B. Schweitzer
- MIND Institute, University of California, Davis, Sacramento, CA, United States
- Department of Psychiatry and Behavioral Sciences, University of California, Davis, Sacramento, CA, United States
29
Pedziwiatr MA, Kümmerer M, Wallis TSA, Bethge M, Teufel C. Semantic object-scene inconsistencies affect eye movements, but not in the way predicted by contextualized meaning maps. J Vis 2022; 22:9. [PMID: 35171232 PMCID: PMC8857618 DOI: 10.1167/jov.22.2.9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022] Open
Abstract
Semantic information is important in eye movement control. An important semantic influence on gaze guidance relates to object-scene relationships: objects that are semantically inconsistent with the scene attract more fixations than consistent objects. One interpretation of this effect is that fixations are driven toward inconsistent objects because they are semantically more informative. We tested this explanation using contextualized meaning maps, a method based on crowd-sourced ratings to quantify the spatial distribution of context-sensitive “meaning” in images. In Experiment 1, we compared gaze data and contextualized meaning maps for images in which object-scene consistency was manipulated. Observers fixated more on inconsistent versus consistent objects. However, contextualized meaning maps did not assign higher meaning to image regions that contained semantic inconsistencies. In Experiment 2, a large number of raters evaluated image regions that were deliberately selected for their content and expected meaningfulness. The results suggest that the same scene locations were experienced as slightly less meaningful when they contained inconsistent compared to consistent objects. In summary, we demonstrated that, in the context of our rating task, semantically inconsistent objects are experienced as less meaningful than their consistent counterparts and that contextualized meaning maps do not capture prototypical influences of image meaning on gaze guidance.
Affiliation(s)
- Marek A Pedziwiatr
- Cardiff University, Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff, UK; Queen Mary University of London, Department of Biological and Experimental Psychology, London, UK
- Thomas S A Wallis
- Technical University of Darmstadt, Institute for Psychology and Centre for Cognitive Science, Darmstadt, Germany
- Christoph Teufel
- Cardiff University, Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff, UK
30
Humphreys L, Higgins SJ, Roberts EV. EXPRESS: Task demands moderate the effect of emotion on attentional capture. Q J Exp Psychol (Hove) 2022; 75:2308-2317. [PMID: 35001737 DOI: 10.1177/17470218221075146] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The current experiment examined the effect of task demands on attention to emotional images. Eighty participants viewed pairs of images, with each pair consisting of an emotional (negative or positive) and a neutral image, or two neutral images. Participants' eye movements were recorded during picture viewing, and participants were either asked 1) which picture contains more colour? (colour task), 2) are the images equally pleasant? (pleasantness task), 3) which picture do you prefer? (preference task), or 4) were given no task instructions (control task). Although the results did not suggest that emotional images strongly captured attention, emotional images were looked at earlier than neutral images. Importantly, the pattern of results was dependent upon the task instructions; whilst the preference and colour task conditions showed early attentional biases to emotional images, only positive images were looked at earlier in the pleasantness task condition, and no early attentional biases were observed in the control task. Moreover, total fixation duration was increased for positive images in the preference task condition, but not in the other task conditions. It was concluded that attention to emotional stimuli can be modified by the demands of the task during viewing. However, further research should consider additional factors, such as the cognitive load of the viewing tasks and the content of the images used.
Affiliation(s)
- Louise Humphreys
- Psychology Department, Staffordshire University, Stoke-on-Trent, United Kingdom
- Sarah Jade Higgins
- Psychology Department, Staffordshire University, Stoke-on-Trent, United Kingdom
- Emma Victoria Roberts
- Psychology Department, Staffordshire University, Stoke-on-Trent, United Kingdom
31
Rusch KM. Combining fMRI and Eye-tracking for the Study of Social Cognition. Neurosci Insights 2021; 16:26331055211065497. [PMID: 34950876 PMCID: PMC8689432 DOI: 10.1177/26331055211065497] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2021] [Accepted: 11/22/2021] [Indexed: 11/25/2022] Open
Abstract
The study of social cognition with functional magnetic resonance imaging (fMRI) affords the use of complex stimulus material. Visual attention to distinct aspects of these stimuli can result in the involvement of remarkably different neural systems. Usually, the influence of gaze on the neural signal is either disregarded or dealt with by controlling participants' gaze through instructions or tasks. However, behavioral restrictions like this limit a study's ecological validity. It would therefore be preferable if participants could look freely at the stimuli while their gaze traces are measured. Yet several impediments hamper the combination of fMRI and eye-tracking. In our recent work on neural Theory of Mind processes in alexithymia, we proposed a simple way of integrating dwell time on specific stimulus features into general linear models of fMRI data. By parametrically modeling fixations, we were able to distinguish neural processes associated with the specific stimulus features being looked at. Here, I discuss the opportunities and obstacles of this approach in more detail. My goal is to motivate a wider use of parametric models, which are usually implemented in common fMRI software packages, to combine fMRI and eye-tracking data.
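The parametric-modulator idea summarized in this abstract can be sketched in a few lines. The following is a minimal illustration only, not the author's actual pipeline: the function names, the SPM-like double-gamma HRF, and all parameter values are assumptions.

```python
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(t):
    """Canonical double-gamma haemodynamic response function (SPM-like shape)."""
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

def dwell_time_regressor(onsets_s, dwell_times_s, n_scans, tr=2.0, dt=0.1):
    """Parametric modulator: a stick at each event onset, scaled by the
    mean-centred dwell time on the stimulus feature, convolved with the HRF
    and downsampled to one value per scan."""
    mod = np.asarray(dwell_times_s, float)
    mod = mod - mod.mean()              # mean-centre: decorrelates the modulator
                                        # from the unmodulated main-effect regressor
    hi_res = np.zeros(int(round(n_scans * tr / dt)))
    for onset, m in zip(onsets_s, mod):
        hi_res[int(round(onset / dt))] += m
    hrf = double_gamma_hrf(np.arange(0, 32, dt))
    conv = np.convolve(hi_res, hrf)[: hi_res.size]
    return conv[:: int(round(tr / dt))]  # one GLM design-matrix column, per scan
```

Such a column would be entered into the design matrix alongside the unmodulated event regressor, which is how parametric modulators are conventionally handled in common fMRI packages.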
Affiliation(s)
- Kristin Marie Rusch
- Laboratory for Multimodal Neuroimaging, Department of Psychiatry and Psychotherapy, University of Marburg, Marburg, Germany; Department of Neurology and Neurorehabilitation, Hospital zum Heiligen Geist, Academic Teaching Hospital of the Heinrich-Heine-University Düsseldorf, Kempen, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen
32
Kothinti SR, Huang N, Elhilali M. Auditory salience using natural scenes: An online study. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 150:2952. [PMID: 34717500 PMCID: PMC8528551 DOI: 10.1121/10.0006750] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Salience is the quality of a sensory signal that attracts involuntary attention in humans. While it primarily reflects conspicuous physical attributes of a scene, our understanding of the processes underlying what makes a certain object or event salient remains limited. In the vision literature, experimental results, theoretical accounts, and large amounts of eye-tracking data using rich stimuli have shed light on some of the underpinnings of visual salience in the brain. In contrast, studies of auditory salience have lagged behind due to limitations in both the experimental designs and the stimulus datasets used to probe the question of salience in complex everyday soundscapes. In this work, we deploy an online platform to study salience using a dichotic listening paradigm with natural auditory stimuli. The study validates crowd-sourcing as a reliable platform for collecting behavioral responses to auditory salience by comparing experimental outcomes to findings acquired in a controlled laboratory setting. A model-based analysis demonstrates the benefits of extending behavioral measures of salience to a broader selection of auditory scenes and larger pools of subjects. Overall, this effort extends our current knowledge of auditory salience in everyday soundscapes and highlights the limitations of low-level acoustic attributes in capturing the richness of natural soundscapes.
Affiliation(s)
- Sandeep Reddy Kothinti
- Department of Electrical and Computer Engineering, Center for Language and Speech Processing, The Johns Hopkins University, Baltimore, Maryland 21218, USA
- Nicholas Huang
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, Maryland 21218, USA
- Mounya Elhilali
- Department of Electrical and Computer Engineering, Center for Language and Speech Processing, The Johns Hopkins University, Baltimore, Maryland 21218, USA
33
Zangrossi A, Cona G, Celli M, Zorzi M, Corbetta M. Visual exploration dynamics are low-dimensional and driven by intrinsic factors. Commun Biol 2021; 4:1100. [PMID: 34535744 PMCID: PMC8448835 DOI: 10.1038/s42003-021-02608-x] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2021] [Accepted: 08/17/2021] [Indexed: 02/08/2023] Open
Abstract
When looking at visual images, the eyes move to the most salient and behaviourally relevant objects. Saliency and semantic information significantly explain where people look. Less is known about the spatiotemporal properties of eye movements (i.e., how people look). We show that three latent variables explain 60% of eye movement dynamics of more than a hundred observers looking at hundreds of different natural images. The first component explaining 30% of variability loads on fixation duration, and it does not relate to image saliency or semantics; it approximates a power-law distribution of gaze steps, an intrinsic dynamic measure, and identifies observers with two viewing styles: static and dynamic. Notably, these viewing styles were also identified when observers look at a blank screen. These results support the importance of endogenous processes such as intrinsic dynamics to explain eye movement spatiotemporal properties.
Affiliation(s)
- Andrea Zangrossi
- Department of Neuroscience, University of Padova, Padova, Italy; Padova Neuroscience Center (PNC), University of Padova, Padova, Italy; Venetian Institute of Molecular Medicine, VIMM, Padova, Italy
- Giorgia Cona
- Padova Neuroscience Center (PNC), University of Padova, Padova, Italy; Department of General Psychology, University of Padova, Padova, Italy
- Miriam Celli
- Padova Neuroscience Center (PNC), University of Padova, Padova, Italy; Venetian Institute of Molecular Medicine, VIMM, Padova, Italy
- Marco Zorzi
- Department of General Psychology, University of Padova, Padova, Italy; IRCCS San Camillo Hospital, Venice, Italy
- Maurizio Corbetta
- Department of Neuroscience, University of Padova, Padova, Italy; Padova Neuroscience Center (PNC), University of Padova, Padova, Italy; Venetian Institute of Molecular Medicine, VIMM, Padova, Italy
34
Goettker A, Gegenfurtner KR. A change in perspective: The interaction of saccadic and pursuit eye movements in oculomotor control and perception. Vision Res 2021; 188:283-296. [PMID: 34489101 DOI: 10.1016/j.visres.2021.08.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Revised: 07/26/2021] [Accepted: 08/16/2021] [Indexed: 11/17/2022]
Abstract
Due to the close relationship between oculomotor behavior and visual processing, eye movements have been studied in many different areas of research over the last few decades. While these studies have brought interesting insights, specialization within each research area comes at the potential cost of a narrow and isolated view of the oculomotor system. In this review, we want to expand this perspective by looking at the interactions between the two most important types of voluntary eye movements: saccades and pursuit. Recent evidence indicates multiple interactions and shared signals at the behavioral and neurophysiological level for oculomotor control and for visual perception during pursuit and saccades. Oculomotor control seems to be based on shared position- and velocity-related information, which leads to multiple behavioral interactions and synergies. The distinction between position- and velocity-related information seems to be also present at the neurophysiological level. In addition, visual perception seems to be based on shared efferent signals about upcoming eye positions and velocities, which are to some degree independent of the actual oculomotor response. This review suggests an interactive perspective on the oculomotor system, based mainly on different types of sensory input, and less so on separate subsystems for saccadic or pursuit eye movements.
Affiliation(s)
- Alexander Goettker
- Abteilung Allgemeine Psychologie and Center for Mind, Brain & Behavior, Justus-Liebig University Giessen, Germany.
- Karl R Gegenfurtner
- Abteilung Allgemeine Psychologie and Center for Mind, Brain & Behavior, Justus-Liebig University Giessen, Germany
35
Zimmermann KM, Schmidt KD, Gronow F, Sommer J, Leweke F, Jansen A. Seeing things differently: Gaze shapes neural signal during mentalizing according to emotional awareness. Neuroimage 2021; 238:118223. [PMID: 34098065 DOI: 10.1016/j.neuroimage.2021.118223] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2021] [Revised: 05/26/2021] [Accepted: 05/29/2021] [Indexed: 12/19/2022] Open
Abstract
Studies on social cognition often use complex visual stimuli to assess neural processes attributed to abilities like "mentalizing" or "Theory of Mind" (ToM). During the processing of these stimuli, however, eye gaze shapes neural signal patterns. Individual differences in neural operations underlying social cognition may therefore be obscured if individuals' gaze behavior differs systematically. These obstacles can be overcome by the combined analysis of neural signal and natural viewing behavior. Here, we combined functional magnetic resonance imaging (fMRI) with eye-tracking to examine effects of unconstrained gaze on neural ToM processes in healthy individuals with differing levels of emotional awareness, i.e., alexithymia. First, as previously described for emotional tasks, people with higher alexithymia levels look less at eyes in both ToM and task-free viewing contexts. Further, we find that neural ToM processes are not affected by individual differences in alexithymia per se. Instead, depending on alexithymia levels, gaze on critical stimulus aspects shapes the signal in opposite directions in the medial prefrontal cortex (MPFC) and anterior temporoparietal junction (TPJ) as distinct nodes of the ToM system. These results emphasize that natural selective attention affects fMRI patterns well beyond the visual system. Our study implies that, whenever using a task with multiple degrees of freedom in scan paths, ignoring the latter might obscure important conclusions.
Affiliation(s)
- Kristin Marie Zimmermann
- Laboratory for Multimodal Neuroimaging, Department of Psychiatry and Psychotherapy, University of Marburg, Marburg, Germany; Department of Neurology and Neurorehabilitation, Hospital zum Heiligen Geist, Academic Teaching Hospital of the Heinrich-Heine-University Düsseldorf, Düsseldorf, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen.
- Kirsten Daniela Schmidt
- Laboratory for Multimodal Neuroimaging, Department of Psychiatry and Psychotherapy, University of Marburg, Marburg, Germany
- Franziska Gronow
- Laboratory for Multimodal Neuroimaging, Department of Psychiatry and Psychotherapy, University of Marburg, Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen
- Jens Sommer
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen; Core-Unit Brainimaging, Faculty of Medicine, University of Marburg, Marburg, Germany
- Frank Leweke
- Clinic for Psychosomatic Medicine and Psychotherapy, Justus Liebig University Giessen, Giessen, Germany
- Andreas Jansen
- Laboratory for Multimodal Neuroimaging, Department of Psychiatry and Psychotherapy, University of Marburg, Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen; Core-Unit Brainimaging, Faculty of Medicine, University of Marburg, Marburg, Germany
36
Linka M, de Haas B. OSIEshort: A small stimulus set can reliably estimate individual differences in semantic salience. J Vis 2021; 20:13. [PMID: 32945849 PMCID: PMC7509791 DOI: 10.1167/jov.20.9.13] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Recent findings revealed consistent individual differences in fixation tendencies among observers free-viewing complex scenes. The present study aimed at (1) replicating these differences, and (2) testing whether they can be estimated using a shorter test. In total, 103 participants completed two eye-tracking sessions. The first session was a direct replication of the original study, but the second session used a smaller subset of images, optimized to capture individual differences efficiently. The first session replicated the large and consistent individual differences along five semantic dimensions observed in the original study. The second session showed that these differences can be estimated using about 40 to 100 images (depending on the tested dimension). Additional analyses revealed that only the first 2 seconds of viewing duration seem to be informative regarding these differences. Taken together, our findings suggest that reliable individual differences in semantic salience can be estimated with a test totaling less than 2 minutes of viewing duration.
Affiliation(s)
- Marcel Linka
- Experimental Psychology, Justus Liebig Universität, Giessen, Germany
- Benjamin de Haas
- Experimental Psychology, Justus Liebig Universität, Giessen, Germany
37
Drewes J, Feder S, Einhäuser W. Gaze During Locomotion in Virtual Reality and the Real World. Front Neurosci 2021; 15:656913. [PMID: 34108857 PMCID: PMC8180583 DOI: 10.3389/fnins.2021.656913] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2021] [Accepted: 04/27/2021] [Indexed: 11/20/2022] Open
Abstract
How vision guides gaze in realistic settings has been researched for decades. Human gaze behavior is typically measured in laboratory settings that are well controlled but feature-reduced and movement-constrained, in sharp contrast to real-life gaze control, which combines eye, head, and body movements. Previous real-world research has shown environmental factors such as terrain difficulty to affect gaze; however, real-world settings are difficult to control or replicate. Virtual reality (VR) offers the experimental control of a laboratory, yet approximates the freedom and visual complexity of the real world (RW). We measured gaze data in 8 healthy young adults during walking in the RW and simulated locomotion in VR. Participants walked along a pre-defined path inside an office building, which included different terrains such as long corridors and flights of stairs. In VR, participants followed the same path in a detailed virtual reconstruction of the building. We devised a novel hybrid control strategy for movement in VR: participants did not actually translate; forward movements were controlled by a hand-held device, while rotational movements were executed physically and transferred to the VR. We found significant effects of terrain type (flat corridor, staircase up, and staircase down) on gaze direction, on the spatial spread of gaze direction, and on the angular distribution of gaze-direction changes. The factor world (RW and VR) affected the angular distribution of gaze-direction changes, saccade frequency, and head-centered vertical gaze direction. The latter effect vanished when referencing gaze to a world-fixed coordinate system, and was likely due to specifics of headset placement, which cannot confound any other analyzed measure. Importantly, we did not observe a significant interaction between the factors world and terrain for any of the tested measures. This indicates that differences between terrain types are not modulated by the world. The overall dwell time on navigational markers did not differ between worlds. The similar dependence of gaze behavior on terrain in the RW and in VR indicates that our VR captures real-world constraints remarkably well. High-fidelity VR combined with naturalistic movement control therefore has the potential to narrow the gap between the experimental control of a lab and ecologically valid settings.
Affiliation(s)
- Jan Drewes
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China
- Physics of Cognition Group, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
- Sascha Feder
- Cognitive Systems Lab, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
- Wolfgang Einhäuser
- Physics of Cognition Group, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
38
Abstract
Studies on colored transparent objects have elucidated potential mechanisms, but these studies have mainly focused on flat filters overlaying flat backgrounds. While they have provided valuable insight, these studies have not captured all aspects of transparency, like caustics, specular reflections/highlights, and shadows. Here, we report color-matching experiments with curved transparent objects for different matching stimuli: a uniform patch and a flat filter. Two instructions were tested: simply match the color of the glass object and the test element (patch and flat filter), or match the color of the dye that was used to tint the transparent object (patch). Observers' matches differed from the mean, the most frequent, and the most saturated color of the transparent stimuli, whereas the brightest regions captured the chromaticity, but not the lightness, of patch matches. We applied four models from flat-filter studies: the convergence model, the ratios of either the means (RMC) or standard deviations (RSD) of cone excitations, and a robust ratio model. The original convergence model does not fully generalize but does not perform poorly, and with modifications, we find that curved transparent objects cause a convergence of filtered colors toward a point in color space, similar to flat filters. Given that, the RMC and robust ratio models generalized more than the RSD, with the RMC performing best across the stimuli we tested. We conclude that the RMC is probably the strongest factor for determining the color. The RSD instead seems to be related to the perceived “clarity” of glass objects.
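The RMC idea mentioned in the abstract can be illustrated in a few lines. This is a sketch under assumptions, not the authors' implementation; the function and variable names are made up, and the cone-excitation arrays are simulated:

```python
import numpy as np

def rmc(cones_filtered, cones_plain):
    """Ratio of mean cone excitations (RMC).

    Both inputs are (n_pixels, 3) arrays of L, M, S cone excitations, sampled
    from the region seen through the transparent object and from the
    unfiltered surround; the result is one ratio per cone class.
    """
    return np.mean(cones_filtered, axis=0) / np.mean(cones_plain, axis=0)

# Simulated example: a spectrally selective filter that transmits little
# long-wavelength light but most short-wavelength light.
plain = np.array([[0.8, 0.6, 0.4], [0.9, 0.7, 0.5]])
filtered = plain * np.array([0.2, 0.5, 0.9])
ratios = rmc(filtered, plain)  # recovers the simulated transmittance [0.2, 0.5, 0.9]
```

In this toy case the ratios recover the filter's per-cone transmittance exactly; with rendered curved objects, caustics and highlights perturb the means, which is what the generalization test in the abstract probes.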
Affiliation(s)
- Robert Ennis
- Justus-Liebig-Universitaet Giessen, Department of General Psychology, Giessen, Germany
- Katja Doerschner
- Justus-Liebig-Universitaet Giessen, Department of General Psychology, Giessen, Germany; National Magnetic Resonance Research Center, Bilkent University, Ankara, Turkey
39
Rösler L, Göhring S, Strunz M, Gamer M. Social anxiety is associated with heart rate but not gaze behavior in a real social interaction. J Behav Ther Exp Psychiatry 2021; 70:101600. [PMID: 32882674 PMCID: PMC7689581 DOI: 10.1016/j.jbtep.2020.101600] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/09/2020] [Revised: 06/26/2020] [Accepted: 07/07/2020] [Indexed: 11/11/2022]
Abstract
BACKGROUND AND OBJECTIVES Much of our current understanding of social anxiety rests on the use of highly restricted laboratory experiments. The latest technological developments now allow the investigation of eye movements and physiological measures during real social interactions. Considering the wealth of conflicting findings on gaze behavior in social anxiety, the current study aimed at elucidating the modulation of gaze patterns in a naturalistic setting. METHODS We introduced 71 participants with differing social anxiety symptoms to a waiting room situation while recording heart rate, electrodermal activity, and eye movements using mobile technology. RESULTS We observed fewer fixations on the head of the confederate in the initial waiting phase of the experiment. These head fixations increased when the confederate was involved in a phone call and subsequently initiated an actual conversation. Contrary to gaze-avoidance models of social anxiety, we did not observe any correlations between social anxiety and visual attention, but we did observe an elevated heart rate in participants with high social anxiety. LIMITATIONS Although social anxiety varied considerably in the current sample and reached clinically relevant levels in one third of participants, formal clinical diagnoses were not available. CONCLUSIONS The current findings suggest that gaze avoidance might only occur in specific situations or at very high levels of social anxiety. Fear of eye contact could at times represent a subjectively experienced rather than an objectively measurable feature of the disorder. The observation of an elevated heart rate throughout the entire experiment indicates that physiological hyperactivity might constitute a cardinal feature of social anxiety.
Affiliation(s)
- Lara Rösler
- Department of Psychology, Julius Maximilians University of Würzburg, Würzburg, Germany.
40
Levine SM, Schwarzbach JV. Individualizing Representational Similarity Analysis. Front Psychiatry 2021; 12:729457. [PMID: 34707520 PMCID: PMC8542717 DOI: 10.3389/fpsyt.2021.729457] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/23/2021] [Accepted: 09/10/2021] [Indexed: 11/13/2022] Open
Abstract
Representational similarity analysis (RSA) is a popular multivariate analysis technique in cognitive neuroscience that uses functional neuroimaging to investigate the informational content encoded in brain activity. As RSA is increasingly being used to investigate more clinically-geared questions, the focus of such translational studies turns toward the importance of individual differences and their optimization within the experimental design. In this perspective, we focus on two design aspects: applying individual vs. averaged behavioral dissimilarity matrices to multiple participants' neuroimaging data and ensuring the congruency between tasks when measuring behavioral and neural representational spaces. Incorporating these methods permits the detection of individual differences in representational spaces and yields a better-defined transfer of information from representational spaces onto multivoxel patterns. Such design adaptations are prerequisites for optimal translation of RSA to the field of precision psychiatry.
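The first design point, comparing individual against group-averaged behavioral dissimilarity matrices, can be sketched as follows. This is an illustration only, not the authors' code; the function names, the correlation-distance RDMs, and the Spearman comparison are assumptions:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_correlation(neural_patterns, behavioral_rdm_condensed):
    """Spearman correlation between a neural RDM (computed from
    condition x voxel activity patterns) and a behavioral RDM,
    both in condensed lower-triangle form."""
    neural_rdm = pdist(neural_patterns, metric="correlation")
    rho, _ = spearmanr(neural_rdm, behavioral_rdm_condensed)
    return rho

rng = np.random.default_rng(0)
patterns = rng.normal(size=(8, 50))              # 8 conditions x 50 voxels
individual_rdm = pdist(patterns, "correlation")  # this subject's own ratings
# A group-averaged RDM, here simulated as a noisy version of the individual one:
group_rdm = individual_rdm + rng.normal(scale=0.5, size=individual_rdm.size)
# The subject's own RDM fits their neural data better than the group average:
assert rsa_correlation(patterns, individual_rdm) > rsa_correlation(patterns, group_rdm)
```

The toy assertion captures the paper's point: fitting each participant's own behavioral RDM to their neural data can detect individual differences that an averaged RDM washes out.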
Affiliation(s)
- Seth M Levine
- Institute of Cognitive and Clinical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| | - Jens V Schwarzbach
- Department of Psychiatry and Psychotherapy, University of Regensburg, Regensburg, Germany
41.
Salience-based object prioritization during active viewing of naturalistic scenes in young and older adults. Sci Rep 2020; 10:22057. PMID: 33328485; PMCID: PMC7745017; DOI: 10.1038/s41598-020-78203-7. Open access.
Abstract
Whether fixation selection in real-world scenes is guided by image salience or by objects has been a matter of scientific debate. To contrast the two views, we compared effects of location-based and object-based visual salience in young and older (65 + years) adults. Generalized linear mixed models were used to assess the unique contribution of salience to fixation selection in scenes. When analysing fixation guidance without recourse to objects, visual salience predicted whether image patches were fixated or not. This effect was reduced in older adults, replicating an earlier finding. When using objects as the unit of analysis, we found that highly salient objects were more frequently selected for fixation than objects with low visual salience. Interestingly, this effect was larger for older adults. We also analysed where viewers fixate within objects, once they are selected. A preferred viewing location close to the centre of the object was found for both age groups. The results support the view that objects are important units of saccadic selection. Reconciling the salience view with the object view, we suggest that visual salience contributes to prioritization among objects. Moreover, the data point towards an increasing relevance of object-bound information with increasing age.
42.
Guy N, Lancry-Dayan OC, Pertzov Y. Not all fixations are created equal: The benefits of using ex-Gaussian modeling of fixation durations. J Vis 2020; 20:9. PMID: 33022042; PMCID: PMC7545065; DOI: 10.1167/jov.20.10.9. Open access.
Abstract
Various cognitive and perceptual factors have been shown to modulate the duration of fixations during visual exploration of complex scenes. The majority of these studies have only considered the mean of the distribution of fixation durations. However, this distribution is skewed to the right, so that an increase in the mean may be driven by a lengthening of all fixations (i.e., a right shift of the whole distribution) or only the relatively longer ones (i.e., a longer right tail of the distribution). To determine which factor is at play, the distribution can be modeled with an ex-Gaussian distribution, which is a convolution of a Gaussian and an exponential distribution. Here we demonstrate the usefulness of applying the ex-Gaussian model to empirical distributions of fixation durations and the reliability of its parameters across time. We demonstrate that the ex-Gaussian model has advantages over exclusive consideration of the mean by showing that an increase in the mean can stem from specific changes in the components of the ex-Gaussian distribution. Specifically, the type of image leads to a change in the Gaussian component alone, indicating a right shift of the main mass of the distribution. By contrast, familiarity with the inspected image modifies the exponential component, and results in a more specific modulation of a subset of relatively long fixations. Hence, estimating the ex-Gaussian parameters may provide novel insights into the underlying processes that determine fixation duration and can contribute to the future development of process-based computational models of gaze behavior.
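The decomposition described here can be reproduced with SciPy, whose `exponnorm` distribution is the ex-Gaussian (parameterized as K, loc, scale, with mu = loc, sigma = scale, and tau = K * scale). A minimal sketch on simulated fixation durations — not the authors' analysis code:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated fixation durations (ms): Gaussian component plus exponential tail.
mu, sigma, tau = 180.0, 40.0, 90.0
durations = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

# Maximum-likelihood fit; SciPy's exponnorm(K, loc, scale) maps onto the
# ex-Gaussian parameters as mu = loc, sigma = scale, tau = K * scale.
K, loc, scale = stats.exponnorm.fit(durations)
mu_hat, sigma_hat, tau_hat = loc, scale, K * scale

print(f"mu={mu_hat:.0f} ms, sigma={sigma_hat:.0f} ms, tau={tau_hat:.0f} ms")
```

A shift of the whole distribution shows up in mu, whereas a selective lengthening of long fixations shows up in tau — the distinction the abstract argues the mean alone cannot make.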
Affiliation(s)
- Nitzan Guy
- Department of Psychology, the Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Cognitive Sciences, the Hebrew University of Jerusalem, Jerusalem, Israel
- Yoni Pertzov
- Department of Psychology, the Hebrew University of Jerusalem, Jerusalem, Israel
- https://www.pertzov.com/
43.
Concealed information revealed by involuntary eye movements on the fringe of awareness in a mock terror experiment. Sci Rep 2020; 10:14355. PMID: 32873884; PMCID: PMC7463231; DOI: 10.1038/s41598-020-71487-9. Open access.
Abstract
Involuntary eye movements during fixation are typically inhibited following stimulus onset (oculomotor inhibition, OMI), depending on stimulus saliency and attention, with an earlier and longer OMI for barely visible familiar faces. However, it is still unclear whether the OMI associated with familiarity and perceptual saliency differs enough to allow a reliable OMI-based concealed information test (CIT). In a “mock terror” experiment with 25 volunteers, 13 made a concealed choice of a “terror target” (one of eight), associated with 3 probes (face, name, and residence), which they learned by watching text and videos, whereas 12 “innocents” pre-learned nothing. All participants then watched ~25 min of repeated brief presentations of barely visible (masked) stimuli that included the 8 potential probes, as well as a universally familiar face as a reference, while their eye movements were monitored. We found prolonged and deviant OMI in response to the probes. Combined with each individual's pattern of responses to the reference, our analysis correctly identified 100% of the terror targets and was 95% correct in discriminating “terrorists” from “innocents”. Our results provide a proof of concept for a novel approach to CIT, based on involuntary oculomotor responses to barely visible, individually tailored stimuli, with high accuracy and theoretical resistance to countermeasures.
44.
Goettker A, Agtzidis I, Braun DI, Dorr M, Gegenfurtner KR. From Gaussian blobs to naturalistic videos: Comparison of oculomotor behavior across different stimulus complexities. J Vis 2020; 20:26. PMID: 32845961; PMCID: PMC7453049; DOI: 10.1167/jov.20.8.26. Open access.
Abstract
Research on eye movements has primarily been performed in two distinct ways: (1) under highly controlled conditions using simple stimuli such as dots on a uniform background, or (2) under free-viewing conditions with complex images, real-world movies, or even with observers moving around in the world. Although both approaches offer important insights, the generalizability of eye movement behavior across these different conditions is unclear. Here, we compared eye movement responses to video clips showing moving objects within their natural context with responses to simple Gaussian blobs on a blank screen. Importantly, for both conditions, the targets moved along the same trajectories at the same speed. We measured standard oculometric measures for both stimulus complexities, as well as the effect of the relative angle between saccades and pursuit, and compared them across conditions. In general, eye movement responses were qualitatively similar, especially with respect to pursuit gain. For both types of stimuli, the accuracy of saccades and subsequent pursuit was highest when both eye movements were collinear. We also found interesting differences; for example, latencies of initial saccades to moving Gaussian blob targets were significantly faster compared to saccades to moving objects in video scenes, whereas pursuit accuracy was significantly higher in video scenes. These findings suggest a lower processing demand for simple target conditions during saccade preparation and an advantage for tracking behavior in natural scenes due to higher predictability provided by the context information.
Affiliation(s)
- Alexander Goettker
- Abteilung Allgemeine Psychologie, Justus-Liebig University, Gießen, Germany
- Doris I. Braun
- Abteilung Allgemeine Psychologie, Justus-Liebig University, Gießen, Germany
45.
Rigby SN, Jakobson LS, Pearson PM, Stoesz BM. Alexithymia and the Evaluation of Emotionally Valenced Scenes. Front Psychol 2020; 11:1820. PMID: 32793083; PMCID: PMC7394003; DOI: 10.3389/fpsyg.2020.01820. Open access.
Abstract
Alexithymia is a personality trait characterized by difficulties identifying and describing feelings (DIF and DDF) and an externally oriented thinking (EOT) style. The primary aim of the present study was to investigate links between alexithymia and the evaluation of emotional scenes. We also investigated whether viewers' evaluations of emotional scenes were better predicted by specific alexithymic traits or by individual differences in sensory processing sensitivity (SPS). Participants (N = 106) completed measures of alexithymia and SPS along with a task requiring speeded judgments of the pleasantness of 120 moderately arousing scenes. We did not replicate laterality effects previously described with the scene perception task. Compared to those with weak alexithymic traits, individuals with moderate-to-strong alexithymic traits were less likely to classify positively valenced scenes as pleasant and were less likely to classify scenes with (vs. without) implied motion (IM) in a way that was consistent with normative scene valence ratings. In addition, regression analyses confirmed that reporting strong EOT and a tendency to be easily overwhelmed by busy sensory environments negatively predicted classification accuracy for positive scenes, and that both DDF and EOT negatively predicted classification accuracy for scenes depicting IM. These findings highlight the importance of accounting for stimulus characteristics and individual differences in specific traits associated with alexithymia and SPS when investigating the processing of emotional stimuli. Learning more about the links between these individual difference variables may have significant clinical implications, given that alexithymia is an important, transdiagnostic risk factor for a wide range of psychopathologies.
Affiliation(s)
- Sarah N Rigby
- Department of Psychology, University of Manitoba, Winnipeg, MB, Canada
- Lorna S Jakobson
- Department of Psychology, University of Manitoba, Winnipeg, MB, Canada
- Pauline M Pearson
- Department of Psychology, University of Manitoba, Winnipeg, MB, Canada
- Department of Psychology, University of Winnipeg, Winnipeg, MB, Canada
- Brenda M Stoesz
- Department of Psychology, University of Manitoba, Winnipeg, MB, Canada
- Centre for the Advancement of Teaching and Learning, University of Manitoba, Winnipeg, MB, Canada
46.
Hayes TR, Henderson JM. Center bias outperforms image salience but not semantics in accounting for attention during scene viewing. Atten Percept Psychophys 2020; 82:985-994. PMID: 31456175; PMCID: PMC11149060; DOI: 10.3758/s13414-019-01849-7.
Abstract
How do we determine where to focus our attention in real-world scenes? Image saliency theory proposes that our attention is 'pulled' to scene regions that differ in low-level image features. However, models that formalize image saliency theory often contain significant scene-independent spatial biases. In the present studies, three different viewing tasks were used to evaluate whether image saliency models account for variance in scene fixation density based primarily on scene-dependent, low-level feature contrast, or on their scene-independent spatial biases. For comparison, fixation density was also compared to semantic feature maps (Meaning Maps; Henderson & Hayes, Nature Human Behaviour, 1, 743-747, 2017) that were generated using human ratings of isolated scene patches. The squared correlations (R2) between scene fixation density and each image saliency model's center bias, each full image saliency model, and meaning maps were computed. The results showed that in tasks that produced observer center bias, the image saliency models on average explained 23% less variance in scene fixation density than their center biases alone. In comparison, meaning maps explained on average 10% more variance than center bias alone. We conclude that image saliency theory generalizes poorly to real-world scenes.
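The study's central comparison — how much of the scene fixation density each predictor map explains, measured as squared correlation — reduces to a few lines of NumPy. A toy sketch with synthetic maps (assumed stand-ins, not the study's fixation or meaning-map data):

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 24, 32
yy, xx = np.mgrid[0:h, 0:w]

# Scene-independent center bias: an isotropic Gaussian at the image center.
center_bias = np.exp(-((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (2 * 6.0 ** 2))

# Toy fixation density: mostly center bias plus scene-specific structure.
fix_density = 0.8 * center_bias + 0.2 * rng.uniform(size=(h, w))

def r_squared(pred, obs):
    """Squared Pearson correlation between two flattened maps."""
    return np.corrcoef(pred.ravel(), obs.ravel())[0, 1] ** 2

r2_center = r_squared(center_bias, fix_density)
print(f"R^2 for center bias alone: {r2_center:.2f}")
```

The paper's comparison amounts to computing this R^2 for the center-bias map, the full saliency map, and the meaning map, then asking which predictor adds variance beyond center bias alone.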
Affiliation(s)
- Taylor R Hayes
- Center for Mind and Brain, University of California, Davis, CA, USA.
- John M Henderson
- Center for Mind and Brain, University of California, Davis, CA, USA
- Department of Psychology, University of California, Davis, CA, USA
47.
Wegner-Clemens K, Rennig J, Magnotti JF, Beauchamp MS. Using principal component analysis to characterize eye movement fixation patterns during face viewing. J Vis 2019; 19:2. PMID: 31689715; PMCID: PMC6833982; DOI: 10.1167/19.13.2. Open access.
Abstract
Human faces contain dozens of visual features, but viewers preferentially fixate just two of them: the eyes and the mouth. Face-viewing behavior is usually studied by manually drawing regions of interest (ROIs) on the eyes, mouth, and other facial features. ROI analyses are problematic as they require arbitrary experimenter decisions about the location and number of ROIs, and they discard data because all fixations within each ROI are treated identically and fixations outside of any ROI are ignored. We introduce a data-driven method that uses principal component analysis (PCA) to characterize human face-viewing behavior. All fixations are entered into a PCA, and the resulting eigenimages provide a quantitative measure of variability in face-viewing behavior. In fixation data from 41 participants viewing four face exemplars under three stimulus and task conditions, the first principal component (PC1) separated the eye and mouth regions of the face. PC1 scores varied widely across participants, revealing large individual differences in preference for eye or mouth fixation, and PC1 scores varied by condition, revealing the importance of behavioral task in determining fixation location. Linear mixed effects modeling of the PC1 scores demonstrated that task condition accounted for 41% of the variance, individual differences accounted for 28% of the variance, and stimulus exemplar for less than 1% of the variance. Fixation eigenimages provide a useful tool for investigating the relative importance of the different factors that drive human face-viewing behavior.
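The eigenimage idea can be illustrated briefly: stack per-participant fixation maps, center them, and take the SVD; the first right singular vector is the PC1 eigenimage and the projections are per-participant scores. The data below are synthetic (a contrived eye-vs-mouth preference), not the published pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, h, w = 41, 32, 32

# Toy fixation maps: each participant mixes an "eyes" and a "mouth" hotspot.
eyes = np.zeros((h, w))
mouth = np.zeros((h, w))
eyes[8:12, 8:24] = 1.0
mouth[22:26, 12:20] = 1.0
weights = rng.uniform(0.0, 1.0, n_subj)  # each observer's eye preference
maps = np.array([wt * eyes + (1.0 - wt) * mouth for wt in weights])

# Center the flattened maps and run PCA via the SVD.
X = maps.reshape(n_subj, -1)
X = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)

pc1_image = Vt[0].reshape(h, w)   # first eigenimage: eye/mouth contrast
pc1_scores = U[:, 0] * S[0]       # per-participant preference scores
var_explained = S[0] ** 2 / np.sum(S ** 2)
print(f"PC1 explains {var_explained:.1%} of fixation-map variance")
```

Because the toy maps vary along a single eye-vs-mouth axis, PC1 captures essentially all the variance; with real fixation data the PC1 scores are what enter the mixed-effects model over task, participant, and exemplar.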
Affiliation(s)
- Kira Wegner-Clemens
- Department of Neurosurgery and Core for Advanced MRI, Baylor College of Medicine, Houston, TX
- Johannes Rennig
- Department of Neurosurgery and Core for Advanced MRI, Baylor College of Medicine, Houston, TX
- John F Magnotti
- Department of Neurosurgery and Core for Advanced MRI, Baylor College of Medicine, Houston, TX
- Michael S Beauchamp
- Department of Neurosurgery and Core for Advanced MRI, Baylor College of Medicine, Houston, TX
- Department of Neurosurgery and Core for Advanced MRI, Baylor College of Medicine, Houston, TX
48.
Affiliation(s)
- Katja Fiehler
- Department of Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), Universities of Marburg and Giessen, Germany
- Eli Brenner
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, The Netherlands
- Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada
49.
A novel perceptual trait: gaze predilection for faces during visual exploration. Sci Rep 2019; 9:10714. PMID: 31341217; PMCID: PMC6656722; DOI: 10.1038/s41598-019-47110-x. Open access.
Abstract
Humans are social animals and typically tend to seek social interactions. In our daily life we constantly move our gaze to collect visual information, which often includes social information such as others’ emotions and intentions. Recent studies began to explore how individuals vary in their gaze behavior. However, these studies focused on basic features of eye movements (such as the length of movements) and did not examine the observer’s predilection for specific social features such as faces. We performed two test-retest experiments examining the amount of time individuals fixate directly on faces embedded in images of naturally occurring scenes. We report stable and robust individual differences in visual predilection for faces across time and tasks. Individuals’ preference to fixate on faces could not be explained by a preference for fixating on low-level salient regions (e.g. color, intensity, orientation) nor by individual differences in the Big Five personality traits. We conclude that during visual exploration individuals vary in the amount of time they direct their gaze towards faces. This tendency is a trait that not only reflects individuals’ preferences but also influences the amount of information gathered by each observer, therefore influencing the basis for later cognitive processing and decisions.