1. Akselevich V, Gilaie-Dotan S. Positive and negative facial valence perception are modulated differently by eccentricity in the parafovea: Replication from KDEF to NimStim. Sci Rep 2024; 14:13757. PMID: 38877079; PMCID: PMC11178822; DOI: 10.1038/s41598-024-63724-2.
Abstract
While perceiving the emotional state of others may be crucial for our behavior even when this information is present outside of central vision, emotion perception studies typically focus on the central visual field. We recently investigated emotional valence (pleasantness) perception across the parafovea (≤ 4°) and found that for briefly presented (200 ms) emotional face images (from the established KDEF image set), positive (happy) valence was the least affected by eccentricity (distance from the central visual field) and negative (fearful) valence the most. Furthermore, we found that performance at 2° predicted performance at 4°. Here we tested (n = 37) whether these effects replicate with face stimuli of different identities from a different well-established image set (NimStim). All our prior findings replicated, and the eccentricity-based modulation was smaller with NimStim (~16.6% accuracy reduction at 4°) than with KDEF stimuli (~27.3% reduction). These findings support our earlier conclusion that for briefly presented parafoveal stimuli, positive and negative valence perception are differently affected by eccentricity and may be dissociated. Furthermore, our results highlight the importance of investigating emotion perception beyond central vision; the commonalities and differences we observed across image sets in the parafovea emphasize the contribution of replication studies to substantiating our knowledge of perceptual mechanisms.
Affiliation(s)
- Vasilisa Akselevich
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, 5290002, Ramat Gan, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Sharon Gilaie-Dotan
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, 5290002, Ramat Gan, Israel.
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel.
- UCL Institute of Cognitive Neuroscience, London, UK.
2. Brook L, Kreichman O, Masarwa S, Gilaie-Dotan S. Higher-contrast images are better remembered during naturalistic encoding. Sci Rep 2024; 14:13445. PMID: 38862623; PMCID: PMC11166978; DOI: 10.1038/s41598-024-63953-5.
Abstract
It is unclear whether memory for images of poorer visibility (such as low contrast or small size) will be lower due to the weak signals they elicit in early visual processing stages, or perhaps better, since their processing may entail top-down processes (such as effort and attention) associated with deeper encoding. We have recently shown that during naturalistic encoding (free viewing without task-related modulations), for image sizes between 3° and 24°, bigger images, which stimulate more visual-system processing resources at early processing stages, are better remembered. Like size, higher contrast leads to higher activity in early visual processing. We therefore hypothesized that during naturalistic encoding, at critical visibility ranges, higher-contrast images would produce a higher signal-to-noise ratio and better signal quality flowing downstream, and would thus be better remembered. Indeed, we found that during naturalistic encoding, higher-contrast images were remembered better than lower-contrast ones (~15% higher accuracy, ~1.58 times better) for images in the 7.5-60 RMS contrast range. Although image contrast and size modulate early visual processing very differently, our results further substantiate that at poor visibility ranges, during naturalistic non-instructed visual behavior, the physical dimensions of an image (which contribute to its visibility) affect how well it is remembered.
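RMS contrast, the measure this study manipulated, is simply the standard deviation of an image's pixel intensities. A minimal sketch (not the authors' analysis code; the [0, 1] normalization convention here is an assumption, since the abstract does not state the exact preprocessing):

```python
import numpy as np

def rms_contrast(image) -> float:
    """RMS contrast: the standard deviation of pixel intensities.

    Assumes a 2-D grayscale array. Intensities are rescaled to
    [0, 1] before taking the standard deviation (one common
    convention; the study's own scaling may differ).
    """
    img = np.asarray(image, dtype=float)
    value_range = img.max() - img.min()
    if value_range == 0:
        return 0.0  # a uniform image carries no contrast
    img = (img - img.min()) / value_range
    return float(img.std())

# A uniform image has zero RMS contrast; a half-black/half-white
# image reaches the maximum possible value of 0.5 under this scaling.
```

Under this [0, 1] scaling the maximum RMS contrast is 0.5, so the 7.5-60 range quoted above presumably refers to a different scale (e.g., 0-255 intensities or percent); the abstract does not specify.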
Affiliation(s)
- Limor Brook
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Olga Kreichman
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Shaimaa Masarwa
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Sharon Gilaie-Dotan
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel.
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel.
- UCL Institute of Cognitive Neuroscience, London, UK.
3. Urtado MB, Rodrigues RD, Fukusima SS. Visual Field Restriction in the Recognition of Basic Facial Expressions: A Combined Eye Tracking and Gaze Contingency Study. Behav Sci (Basel) 2024; 14:355. PMID: 38785846; PMCID: PMC11117586; DOI: 10.3390/bs14050355.
Abstract
Uncertainties and discrepant results in identifying the areas crucial for emotional facial expression recognition may stem from the eye-tracking analysis methods used. Many studies employ analysis parameters that predominantly prioritize the foveal vision angle, ignoring the potential influence of simultaneous parafoveal and peripheral information. To explore the possible causes of these discrepancies, we investigated the role of the visual field aperture in emotional facial expression recognition with 163 volunteers randomly assigned to three groups: no visual restriction (NVR), parafoveal and foveal vision (PFFV), and foveal vision only (FV). Using eye tracking and gaze contingency, we collected visual inspection and judgment data for 30 frontal face images, equally distributed among five emotions. Raw eye-tracking data were processed with Eye Movements Metrics and Visualizations (EyeMMV). Visual inspection time, number of fixations, and fixation duration all increased with visual field restriction. Accuracy, however, differed significantly between the NVR and FV groups and between the PFFV and FV groups, with no difference between NVR and PFFV. The findings underscore the impact of specific visual field areas on facial expression recognition, highlighting the importance of parafoveal vision. The results suggest that eye-tracking analysis methods should incorporate projection angles extending at least to the parafoveal level.
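The fixation metrics reported here (count, duration) come from EyeMMV, which identifies fixations with spatial-dispersion and minimum-duration criteria. The sketch below implements the generic dispersion-threshold idea (I-DT), not EyeMMV itself; the threshold values and the dispersion formula are illustrative assumptions:

```python
import numpy as np

def detect_fixations(x, y, t, dispersion_thresh=1.0, duration_thresh=0.1):
    """Dispersion-threshold (I-DT) fixation detection sketch.

    x, y: gaze coordinates (e.g., degrees of visual angle)
    t:    sample timestamps in seconds
    A run of samples counts as a fixation when its dispersion,
    (max(x) - min(x)) + (max(y) - min(y)), stays at or below
    `dispersion_thresh` for at least `duration_thresh` seconds.
    Returns a list of (onset, offset, centroid_x, centroid_y).
    """
    x, y, t = (np.asarray(a, dtype=float) for a in (x, y, t))
    fixations = []
    i, n = 0, len(t)
    while i < n:
        # grow a window from sample i until it spans the minimum duration
        j = i
        while j < n and t[j] - t[i] < duration_thresh:
            j += 1
        if j >= n:
            break
        disp = (x[i:j+1].max() - x[i:j+1].min()) + (y[i:j+1].max() - y[i:j+1].min())
        if disp <= dispersion_thresh:
            # extend the window while dispersion stays under threshold
            while j + 1 < n:
                xs, ys = x[i:j+2], y[i:j+2]
                if (xs.max() - xs.min()) + (ys.max() - ys.min()) > dispersion_thresh:
                    break
                j += 1
            fixations.append((t[i], t[j],
                              float(x[i:j+1].mean()), float(y[i:j+1].mean())))
            i = j + 1
        else:
            i += 1
    return fixations
```

For example, 200 gaze samples at 100 Hz that dwell first near (0, 0) and then near (5, 5) yield two fixations with those centroids.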
Affiliation(s)
- Melina Boratto Urtado
- Faculty of Philosophy, Sciences and Letters at Ribeirão Preto, University of São Paulo, Ribeirão Preto 14040-901, Brazil
- Sergio Sheiji Fukusima
- Faculty of Philosophy, Sciences and Letters at Ribeirão Preto, University of São Paulo, Ribeirão Preto 14040-901, Brazil
4. Kreichman O, Gilaie-Dotan S. Parafoveal vision reveals qualitative differences between fusiform face area and parahippocampal place area. Hum Brain Mapp 2024; 45:e26616. PMID: 38379465; PMCID: PMC10879909; DOI: 10.1002/hbm.26616.
Abstract
The center-periphery axis of the visual field guides the organization of the early visual system: enhanced resources are devoted to central vision, so for many visual functions peripheral performance is reduced relative to central performance (the behavioral eccentricity effect). This center-periphery organization extends to high-order visual cortex, where, for example, the well-studied face-sensitive fusiform face area (FFA) is sensitive to central vision and the place-sensitive parahippocampal place area (PPA) is sensitive to peripheral vision. As we recently found that face perception is more sensitive to eccentricity than place perception, here we examined whether these behavioral findings reflect differences in the sensitivity of FFA and PPA to eccentricity. We expected FFA to show higher sensitivity to eccentricity than PPA, but that each region's modulation by eccentricity would be invariant to the viewed category. We parametrically investigated (fMRI, n = 32) how FFA and PPA activations are modulated by eccentricity (≤8°) and category (upright/inverted faces/houses) while keeping stimulus size constant. As expected, FFA showed overall higher sensitivity to eccentricity than PPA. However, both regions' activation modulations by eccentricity depended on the viewed category. In FFA, activation decreased with growing eccentricity (a "BOLD eccentricity effect") for all categories, with different amplitudes. In PPA, however, the modulations were qualitatively different (e.g., at 8°, a mild BOLD eccentricity effect for houses, a reversed effect for faces, and no modulation for inverted faces). Our results emphasize that investigations of peripheral vision are critical to furthering our understanding of visual processing.
Affiliation(s)
- Olga Kreichman
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Sharon Gilaie-Dotan
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- UCL Institute of Cognitive Neuroscience, London, UK
5. Akselevich V, Gilaie-Dotan S. Positive and negative facial valence perception are modulated differently by eccentricity in the parafovea. Sci Rep 2022; 12:21693. PMID: 36522350; PMCID: PMC9755278; DOI: 10.1038/s41598-022-24919-7.
Abstract
Understanding whether the people around us are in a good, bad, or neutral mood can be critical to our behavior, both when we look directly at them and when they are in our peripheral visual field. However, facial expressions of emotion are typically investigated at the central visual field or at locations directly right or left of fixation. Here we assumed that perception of facial emotional valence (the emotion's pleasantness) changes with distance from the central visual field (eccentricity) and that different emotions may be influenced differently by eccentricity. Participants (n = 58) judged the valence of emotional faces across the parafovea (≤ 4°; positive (happy), negative (fearful), or neutral) while their eyes were tracked. As expected, performance decreased with eccentricity. Positive valence perception was least affected by eccentricity (accuracy reduction of 10-19% at 4°) and negative the most (accuracy reduction of 35-38% at 4°), and this was not a result of speed-accuracy trade-offs or response biases. Within-valence (but not across-valence) performance was associated across eccentricities, suggesting that perception of different valences is supported by different mechanisms. While our results may not generalize to all positive and negative emotions, they indicate that investigations beyond the fovea can reveal additional characteristics of the mechanisms underlying facial expression processing and perception.
Affiliation(s)
- Vasilisa Akselevich
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, 5290002, Ramat Gan, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Sharon Gilaie-Dotan
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, 5290002, Ramat Gan, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- UCL Institute of Cognitive Neuroscience, London, UK
6. Judah MR, Hager NM, Milam AL, Ramsey-Wilson G, Hamrick HC, Sutton TG. Out of Sight, Still in Mind: The Consequences of Nonfoveal Viewing of Emotional Faces in Social Anxiety. J Soc Clin Psychol 2022. DOI: 10.1521/jscp.2022.41.6.578.
Abstract
Background: Anxiety sensitivity social concerns (ASSC) is a risk factor for social anxiety disorder that may motivate avoidance of eye contact (i.e., gaze avoidance), thereby maintaining anxiety. Gaze avoidance displaces socially relevant stimuli (e.g., faces) from foveal (i.e., central) vision, possibly reducing visual sensation of faces and creating an opportunity to misperceive others as rejecting. Methods: We tested the effects of non-foveal viewing on perceiving faces as rejecting, whether there is an indirect effect of ASSC on state anxiety explained by perceived rejection, and whether that indirect effect depends on non-foveal viewing of faces. Participants (N = 118) viewed faces presented at foveal and non-foveal positions and rated how rejecting each face appeared, followed by ratings of their own state anxiety. Results: ASSC was associated with perceiving faces as rejecting regardless of face position. There was an indirect effect of ASSC on state anxiety ratings explained by perceived rejection, but only in the non-foveal positions, driven by an association between perceived rejection and state anxiety that was present only when faces were viewed in non-foveal vision. Discussion: The findings suggest ASSC may maintain state anxiety partly through the perceived rejection experienced while avoiding the gaze of others. This study supports cognitive theories of social anxiety and encourages cognitive-behavioral interventions for gaze avoidance in people with social anxiety disorder.
Affiliation(s)
- Nathan M. Hager
- Old Dominion University, Norfolk; Virginia Consortium Program in Clinical Psychology, Norfolk
- Alicia L. Milam
- Old Dominion University, Norfolk; Virginia Consortium Program in Clinical Psychology, Norfolk
- Tiphanie G. Sutton
- Old Dominion University, Norfolk; Virginia Consortium Program in Clinical Psychology, Norfolk
7. Canas-Bajo T, Whitney D. Relative tuning of holistic face processing towards the fovea. Vision Res 2022; 197:108049. PMID: 35461170; PMCID: PMC10101769; DOI: 10.1016/j.visres.2022.108049.
Abstract
Humans quickly detect and gaze at faces in the world, which reflects their importance in cognition and may lead to tuning of face recognition toward the central visual field. Although sometimes reported, foveal selectivity in face processing is debated: brain imaging studies have found evidence for a central field bias specific to faces, but behavioral studies have found little foveal selectivity in face recognition. These conflicting results are difficult to reconcile, but they could arise from stimulus-specific differences. Recent studies, for example, suggest that individual faces vary in the degree to which they require holistic processing. Holistic processing is the perception of faces as a whole rather than as a set of separate features. We hypothesized that the dissociation between behavioral and neuroimaging studies arises because of this stimulus-specific dependence on holistic processing. Specifically, the central bias found in neuroimaging studies may be specific to holistic processing. Here, we tested whether the eccentricity-dependence of face perception is determined by the degree to which faces require holistic processing. We first measured the holistic-ness of individual Mooney faces (two-tone shadow images readily perceived as faces). In a group of independent observers, we then used a gender discrimination task to measure recognition of these Mooney faces as a function of their eccentricity. Face gender was recognized across the visual field, even at substantial eccentricities, replicating prior work. Importantly, however, holistic face gender recognition was relatively tuned: slightly, but reliably, stronger in the central visual field. Our results may reconcile the debate on the eccentricity-dependence of face perception and reveal a spatial inhomogeneity specifically in the holistic representations of faces.
Affiliation(s)
- Teresa Canas-Bajo
- Vision Science Graduate Group, University of California, Berkeley, Berkeley, CA, USA.
- David Whitney
- Vision Science Graduate Group, University of California, Berkeley, Berkeley, CA, USA; Department of Psychology, University of California, Berkeley, Berkeley, CA, USA
8. Masarwa S, Kreichman O, Gilaie-Dotan S. Larger images are better remembered during naturalistic encoding. Proc Natl Acad Sci U S A 2022; 119:e2119614119. PMID: 35046050; PMCID: PMC8794838; DOI: 10.1073/pnas.2119614119.
Abstract
We are constantly exposed to multiple visual scenes, and while freely viewing them without an intentional effort to memorize or encode them, only some are remembered. It has been suggested that image memory is influenced by multiple factors, such as depth of processing, familiarity, and visual category. However, this is typically investigated when people are instructed to perform a task (e.g., remember or make some judgment about the images), which may modulate processing at multiple levels and thus may not generalize to naturalistic visual behavior. Visual memory is assumed to rely on high-level visual perception that shows a degree of size invariance, and is therefore not assumed to depend strongly on image size. Here, we reasoned that during naturalistic vision, free of task-related modulations, bigger images stimulate more visual-system processing resources (from retina to cortex) and would therefore be better remembered. In an extensive set of seven experiments, naïve participants (n = 182) freely viewed presented images (sized 3° to 24°) without any instructed encoding task. Afterward, they were given a surprise recognition test (mid-sized images, 50% already seen). Larger images were remembered better than smaller ones across all experiments (~20% higher accuracy, ~1.5 times better). Memory improved with image size; faces were remembered best and outdoor scenes the least. Results were robust even when controlling for image set, presentation order, screen resolution, image scaling at test, or the amount of information. While multiple factors affect image memory, our results suggest that processes from low to high levels may all contribute to image memory.
Affiliation(s)
- Shaimaa Masarwa
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan 5290002, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel
- Olga Kreichman
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan 5290002, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel
- Sharon Gilaie-Dotan
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan 5290002, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel
- Institute of Cognitive Neuroscience, University College London, London WC1N 3AZ, United Kingdom
9. Bornet A, Choung OH, Doerig A, Whitney D, Herzog MH, Manassi M. Global and high-level effects in crowding cannot be predicted by either high-dimensional pooling or target cueing. J Vis 2021; 21:10. PMID: 34812839; PMCID: PMC8626847; DOI: 10.1167/jov.21.12.10.
Abstract
In visual crowding, the perception of a target deteriorates in the presence of nearby flankers. Traditionally, target-flanker interactions have been considered as local, mostly deleterious, low-level, and feature specific, occurring when information is pooled along the visual processing hierarchy. Recently, a vast literature of high-level effects in crowding (grouping effects and face-holistic crowding in particular) led to a different understanding of crowding, as a global, complex, and multilevel phenomenon that cannot be captured or explained by simple pooling models. It was recently argued that these high-level effects may still be captured by more sophisticated pooling models, such as the Texture Tiling model (TTM). Unlike simple pooling models, the high-dimensional pooling stage of the TTM preserves rich information about a crowded stimulus and, in principle, this information may be sufficient to drive high-level and global aspects of crowding. In addition, it was proposed that grouping effects in crowding may be explained by post-perceptual target cueing. Here, we extensively tested the predictions of the TTM on the results of six different studies that highlighted high-level effects in crowding. Our results show that the TTM cannot explain any of these high-level effects, and that the behavior of the model is equivalent to a simple pooling model. In addition, we show that grouping effects in crowding cannot be predicted by post-perceptual factors, such as target cueing. Taken together, these results reinforce once more the idea that complex target-flanker interactions determine crowding and that crowding occurs at multiple levels of the visual hierarchy.
Affiliation(s)
- Alban Bornet
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Oh-Hyeon Choung
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Adrien Doerig
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Donders Institute for Brain, Cognition and Behaviour, Nijmegen, Netherlands
- David Whitney
- Department of Psychology, University of California, Berkeley, California, USA
- Helen Wills Neuroscience Institute, University of California, Berkeley, California, USA
- Vision Science Group, University of California, Berkeley, California, USA
- Michael H Herzog
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Mauro Manassi
- School of Psychology, University of Aberdeen, King's College, Aberdeen, UK
10. Linear Integration of Sensory Evidence over Space and Time Underlies Face Categorization. J Neurosci 2021; 41:7876-7893. PMID: 34326145; DOI: 10.1523/jneurosci.3055-20.2021.
Abstract
Visual object recognition relies on elaborate sensory processes that transform retinal inputs to object representations, but it also requires decision-making processes that read out object representations and function over prolonged time scales. The computational properties of these decision-making processes remain underexplored for object recognition. Here, we study these computations by developing a stochastic multifeature face categorization task. Using quantitative models and tight control of spatiotemporal visual information, we demonstrate that human subjects (five males, eight females) categorize faces through an integration process that first linearly adds the evidence conferred by task-relevant features over space to create aggregated momentary evidence and then linearly integrates it over time with minimum information loss. Discrimination of stimuli along different category boundaries (e.g., identity or expression of a face) is implemented by adjusting the feature weights of spatial integration. This linear but flexible integration process over space and time bridges past studies on simple perceptual decisions to complex object recognition behavior.

Significance Statement: Although simple perceptual decision-making, such as discrimination of random dot motion, has been successfully explained as accumulation of sensory evidence, we lack rigorous experimental paradigms to study the mechanisms underlying complex perceptual decision-making such as discrimination of naturalistic faces. We develop a stochastic multifeature face categorization task as a systematic approach to quantify the properties and potential limitations of the decision-making processes during object recognition. We show that human face categorization could be modeled as a linear integration of sensory evidence over space and time. Our framework for studying object recognition as a spatiotemporal integration process is broadly applicable to other object categories and bridges past studies of object recognition and perceptual decision-making.
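The model this abstract describes (linear spatial pooling of feature evidence, followed by lossless temporal accumulation) can be sketched in a few lines. This is an illustrative reconstruction from the abstract, not the authors' published model code; the evidence format, the function name, and the zero decision boundary are all assumptions:

```python
import numpy as np

def categorize_face(feature_evidence, feature_weights):
    """Linear spatiotemporal evidence-integration sketch.

    feature_evidence: array of shape (n_frames, n_features), the
        momentary evidence conferred by each facial feature
        (e.g., eyes, nose, mouth) on each stimulus frame.
    feature_weights: length n_features vector implementing the
        task-relevant spatial weighting (different category
        boundaries, such as identity vs. expression, correspond
        to different weight vectors).

    Returns the decision variable and a binary category choice.
    """
    evidence = np.asarray(feature_evidence, dtype=float)
    weights = np.asarray(feature_weights, dtype=float)
    momentary = evidence @ weights        # linear spatial integration per frame
    decision_variable = momentary.sum()   # lossless linear temporal integration
    choice = int(decision_variable > 0)   # category boundary at zero (assumed)
    return decision_variable, choice
```

Switching the task from, say, identity to expression discrimination would amount to swapping in a different `feature_weights` vector while the integration rule stays fixed.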
11. Wolf C, Lappe M. Vision as oculomotor reward: cognitive contributions to the dynamic control of saccadic eye movements. Cogn Neurodyn 2021; 15:547-568. PMID: 34367360; PMCID: PMC8286912; DOI: 10.1007/s11571-020-09661-y.
Abstract
Humans and other primates are equipped with a foveated visual system. As a consequence, we reorient our fovea to objects and targets in the visual field that are conspicuous or that we consider relevant or worth looking at. These reorientations are achieved by means of saccadic eye movements. Where we saccade to depends on various low-level factors, such as a target's luminance, but also crucially on high-level factors, such as the expected reward or a target's relevance for perception and subsequent behavior. Here, we review recent findings on how the control of saccadic eye movements is influenced by higher-level cognitive processes. We first describe the pathways by which cognitive contributions can influence the neural oculomotor circuit. Second, we summarize what saccade parameters reveal about cognitive mechanisms, particularly saccade latencies, saccade kinematics, and changes in saccade gain. Finally, we review findings on what renders a saccade target valuable, as reflected in oculomotor behavior. We emphasize that foveal vision of the target after the saccade can constitute an internal reward for the visual system, and that this is reflected in oculomotor dynamics that serve to quickly and accurately provide detailed foveal vision of relevant targets in the visual field.
Affiliation(s)
- Christian Wolf
- Institute for Psychology, University of Muenster, Fliednerstrasse 21, 48149 Münster, Germany
- Markus Lappe
- Institute for Psychology, University of Muenster, Fliednerstrasse 21, 48149 Münster, Germany