1
Lin Y, Hsu YY, Cheng T, Hsiung PC, Wu CW, Hsieh PJ. Neural representations of perspectival shapes and attentional effects: Evidence from fMRI and MEG. Cortex 2024; 176:129-143. PMID: 38781910. DOI: 10.1016/j.cortex.2024.04.003.
Abstract
Does the human brain represent perspectival shapes, i.e., viewpoint-dependent object shapes, especially in relatively higher-level visual areas such as the lateral occipital cortex? What is the temporal profile of the appearance and disappearance of neural representations of perspectival shapes? And how does attention influence these neural representations? To answer these questions, we employed functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), and multivariate decoding techniques to investigate spatiotemporal neural representations of perspectival shapes. Participants viewed rotated objects along with the corresponding objective shapes and perspectival shapes (i.e., rotated round, round, and oval) while we measured their brain activities. Our results revealed that shape classifiers trained on the basic shapes (i.e., round and oval) consistently identified neural representations in the lateral occipital cortex corresponding to the perspectival shapes of the viewed objects regardless of attentional manipulations. Additionally, this classification tendency toward the perspectival shapes emerged approximately 200 ms after stimulus presentation. Moreover, attention influenced the spatial dimension as the regions showing the perspectival shape classification tendency propagated from the occipital lobe to the temporal lobe. As for the temporal dimension, attention led to a more robust and enduring classification tendency towards perspectival shapes. In summary, our study outlines a spatiotemporal neural profile for perspectival shapes that suggests a greater degree of perspectival representation than is often acknowledged.
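The cross-decoding logic described in this abstract (train a classifier on basic shapes, test it on the pattern evoked by a rotated object) can be sketched in a few lines. Everything below is a synthetic illustration, not the authors' fMRI pipeline: the "voxel" patterns, the nearest-centroid decoder, and the bias of the rotated-object pattern toward the oval prototype are all assumptions made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 50

# Synthetic "voxel" pattern prototypes for the two basic shapes (assumed data).
round_proto = rng.normal(0.0, 1.0, n_voxels)
oval_proto = rng.normal(0.0, 1.0, n_voxels)

# Training patterns: noisy copies of each prototype.
train_X = np.vstack(
    [round_proto + rng.normal(0, 0.3, n_voxels) for _ in range(20)]
    + [oval_proto + rng.normal(0, 0.3, n_voxels) for _ in range(20)])
train_y = np.array(["round"] * 20 + ["oval"] * 20)

# Nearest-centroid classifier (a simple stand-in for the paper's decoder).
centroids = {label: train_X[train_y == label].mean(axis=0)
             for label in ("round", "oval")}

def classify(pattern):
    """Return the label of the nearest class centroid."""
    return min(centroids, key=lambda lab: np.linalg.norm(pattern - centroids[lab]))

# By construction (an assumption mimicking the reported effect), the rotated
# round object evokes a pattern biased toward the oval prototype, so the
# shape decoder classifies it as "oval" -- the perspectival shape.
rotated_round = 0.3 * round_proto + 0.7 * oval_proto + rng.normal(0, 0.3, n_voxels)
print(classify(rotated_round))
```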
Affiliation(s)
- Yi Lin
- Taiwan International Graduate Program in Interdisciplinary Neuroscience, National Cheng Kung University and Academia Sinica, Nankan, Taipei, Taiwan; Research Unit Brain and Cognition, KU Leuven, Leuven, Belgium
- Yung-Yi Hsu
- Department of Psychology, National Taiwan University, Da'an, Taipei, Taiwan
- Tony Cheng
- Waseda Institute for Advanced Study, Waseda University, Tokyo, Japan
- Pin-Cheng Hsiung
- Department of Psychology, National Taiwan University, Da'an, Taipei, Taiwan
- Chen-Wei Wu
- Department of Philosophy, Georgia State University, Atlanta, GA, USA
- Po-Jang Hsieh
- Department of Psychology, National Taiwan University, Da'an, Taipei, Taiwan
2
Stewart EEM, Fleming RW, Schütz AC. A simple optical flow model explains why certain object viewpoints are special. Proc Biol Sci 2024; 291:20240577. PMID: 38981528. PMCID: PMC11334996. DOI: 10.1098/rspb.2024.0577.
Abstract
A core challenge in perception is recognizing objects across the highly variable retinal input that occurs when objects are viewed from different directions (e.g. front versus side views). It has long been known that certain views are of particular importance, but it remains unclear why. We reasoned that characterizing the computations underlying visual comparisons between objects could explain the privileged status of certain qualitatively special views. We measured pose discrimination for a wide range of objects, finding large variations in performance depending on the object and the viewing angle, with front and back views yielding particularly good discrimination. Strikingly, a simple and biologically plausible computational model based on measuring the projected three-dimensional optical flow between views of objects accurately predicted both successes and failures of discrimination performance. This provides a computational account of why certain views have a privileged status.
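The flow-based account in this abstract lends itself to a toy sketch. This is not the authors' model: the point-cloud object, the rotation axis, the orthographic projection, and the use of mean projected displacement as a discriminability proxy are all simplifying assumptions (occlusion, for instance, is ignored).

```python
import numpy as np

def rotate_y(points, angle):
    """Rotate an (N, 3) array of points about the vertical (y) axis."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return points @ rot.T

def flow_magnitude(points, angle_a, angle_b):
    """Mean 2D displacement of orthographically projected points between views,
    used here as a crude proxy for how discriminable the two views are."""
    a = rotate_y(points, angle_a)[:, :2]  # project by dropping z
    b = rotate_y(points, angle_b)[:, :2]
    return float(np.linalg.norm(a - b, axis=1).mean())

# An elongated point-cloud "object", stretched along the depth (z) axis.
rng = np.random.default_rng(1)
points = rng.normal(0.0, 1.0, (200, 3)) * np.array([0.3, 0.3, 1.0])

# Predicted discriminability grows with the projected flow between views.
small = flow_magnitude(points, 0.0, np.deg2rad(10))
large = flow_magnitude(points, 0.0, np.deg2rad(40))
print(small, large)
```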
Affiliation(s)
- Emma E. M. Stewart
- School of Biological and Behavioural Sciences, Queen Mary University of London, London E1 4NS, UK
- Department of Experimental and Biological Psychology, Queen Mary University of London, London E1 4NS, UK
- Centre for Brain and Behaviour, Queen Mary University of London, London E1 4NS, UK
- Roland W. Fleming
- Department of Experimental Psychology, Justus Liebig University Giessen, Giessen 35394, Germany
- Centre for Mind, Brain, and Behaviour (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen 35032, Germany
- Alexander C. Schütz
- Centre for Mind, Brain, and Behaviour (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen 35032, Germany
- General and Experimental Psychology, University of Marburg, Marburg 35032, Germany
3
Hertzmann A. Toward a theory of perspective perception in pictures. J Vis 2024; 24:23. PMID: 38662346. PMCID: PMC11055503. DOI: 10.1167/jov.24.4.23.
Abstract
This paper reviews projection models and their perception in realistic pictures, and proposes hypotheses for three-dimensional (3D) shape and space perception in pictures. In these hypotheses, eye fixations and foveal vision play a central role. Many past theories and experimental studies focus solely on linear perspective. Yet, these theories fail to explain many important perceptual phenomena, including the effectiveness of nonlinear projections. Indeed, few classical paintings strictly obey linear perspective, nor do the best distortion-avoidance techniques for wide-angle computational photography. The hypotheses here employ a two-stage model for 3D human vision. When viewing a picture, the first stage perceives 3D shape for the current gaze. Each fixation has its own perspective projection, but, owing to the nature of foveal and peripheral vision, shape information is obtained primarily for a small region of the picture around the fixation. As a viewer moves their eyes, the second stage continually integrates some of the per-gaze information into an overall interpretation of a picture. The interpretation need not be geometrically stable or consistent over time. It is argued that this framework could explain many disparate pictorial phenomena, including different projection styles throughout art history and computational photography, while being consistent with the constraints of human 3D vision. The paper reviews open questions and suggests new studies to explore these hypotheses.
Affiliation(s)
- Aaron Hertzmann
- Adobe Research, San Francisco, CA, USA
- https://www.dgp.toronto.edu/~hertzman
4
Gayet S, Battistoni E, Thorat S, Peelen MV. Searching near and far: The attentional template incorporates viewing distance. J Exp Psychol Hum Percept Perform 2024; 50:216-231. PMID: 38376937. PMCID: PMC7616437. DOI: 10.1037/xhp0001172.
Abstract
According to theories of visual search, observers generate a visual representation of the search target (the "attentional template") that guides spatial attention toward target-like visual input. In real-world vision, however, objects produce vastly different visual input depending on their location: your car produces a retinal image that is 10 times smaller when it is parked 50 m away compared to 5 m away. Across four experiments, we investigated whether the attentional template incorporates viewing distance when observers search for familiar object categories. On each trial, participants were precued to search for a car or person in the near or far plane of an outdoor scene. In "search trials," the scene reappeared and participants had to indicate whether the search target was present or absent. In intermixed "catch-trials," two silhouettes were briefly presented on either side of fixation (matching the shape and/or predicted size of the search target), one of which was followed by a probe-stimulus. We found that participants were more accurate at reporting the location (Experiments 1 and 2) and orientation (Experiment 3) of probe stimuli when they were presented at the location of size-matching silhouettes. Thus, attentional templates incorporate the predicted size of an object based on the current viewing distance. This was only the case, however, when silhouettes also matched the shape of the search target (Experiment 2). We conclude that attentional templates for finding objects in scenes are shaped by a combination of category-specific attributes (shape) and context-dependent expectations about the likely appearance (size) of these objects at the current viewing location.
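The abstract's car example follows from elementary visual-angle geometry. A minimal sketch, assuming a 4 m object length (an illustrative figure, not taken from the paper):

```python
import math

def angular_size(object_size_m, distance_m):
    """Visual angle (radians) subtended by an object at a given distance."""
    return 2.0 * math.atan(object_size_m / (2.0 * distance_m))

near = angular_size(4.0, 5.0)    # car parked 5 m away
far = angular_size(4.0, 50.0)    # same car parked 50 m away
print(near / far)                # close to the 10x of the abstract's example
```

In the small-angle approximation the ratio is exactly 10; the exact ratio is slightly smaller because the near view subtends a large angle.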
Affiliation(s)
- Surya Gayet
- Experimental Psychology, Helmholtz Institute, Utrecht University
- Sushrut Thorat
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
5
Yoo SA, Lee S, Joo SJ. Monocular cues are superior to binocular cues for size perception when they are in conflict in virtual reality. Cortex 2023; 166:80-90. PMID: 37343313. DOI: 10.1016/j.cortex.2023.05.010.
Abstract
Three-dimensional (3D) depth information is important to estimate object sizes. The visual system extracts 3D depth information using both binocular cues and monocular cues. However, how these different depth signals interact with each other to compute the object size in 3D space is unclear. Here, we aim to study the relative contribution of monocular and binocular depth information to size perception in a modified Ponzo context by manipulating their relations in a virtual reality environment. Specifically, we compared the amount of the size illusion in the following two conditions, in which monocular cues and binocular disparity in the Ponzo context can indicate the same depth sign (congruent) or opposite depth sign (incongruent). Our results show an increase in the amount of the Ponzo illusion in the congruent condition. In contrast, in the incongruent condition, we find that the two cues indicating the opposite depth signs do not cancel out the Ponzo illusion, suggesting that the effects of the two cues are not equal. Rather, binocular disparity information seems to be suppressed and the size judgment is mainly dependent on the monocular depth information when the two cues are in conflict. Our results suggest that monocular and binocular depth signals are fused for size perception only when they both indicate the same depth sign and top-down 3D depth information based on monocular cues contributes more to size perception than binocular disparity when they are in conflict in virtual reality.
Affiliation(s)
- Sang-Ah Yoo
- Department of Psychology, Pusan National University, Busan, Republic of Korea
- Suhyun Lee
- Department of Psychology, Pusan National University, Busan, Republic of Korea
- Sung Jun Joo
- Department of Psychology, Pusan National University, Busan, Republic of Korea
6
Burge J, Burge T. Shape, perspective, and what is and is not perceived: Comment on Morales, Bax, and Firestone (2020). Psychol Rev 2023; 130:1125-1136. PMID: 35549319. PMCID: PMC11366222. DOI: 10.1037/rev0000363.
Abstract
Psychology and philosophy have long reflected on the role of perspective in vision. Since the dawn of modern vision science (roughly, since Helmholtz in the late 1800s), scientific explanations in vision have focused on understanding the computations that transform the sensed retinal image into percepts of the three-dimensional environment. The standard view in the science is that distal properties, that is, viewpoint-independent properties of the environment (object shape) and viewpoint-dependent relational properties (3D orientation relative to the viewer), are perceptually represented and that properties of the proximal stimulus (in vision, the retinal image) are not. This view is woven into the nature of scientific explanation in perceptual psychology, and has guided impressive advances over the past 150 years. A recently published article suggests that in shape perception, the standard view must be revised. It argues, on the basis of new empirical data, that a new entity, perspectival shape, should be introduced into scientific explanations of shape perception. Specifically, the article's centrally advertised claim is that, in addition to distal shape, perspectival shape is perceived. We argue that this claim rests on a series of mistakes. Problems in experimental design entail that the article provides no empirical support for any claims regarding either perspective or the perception of shape. There are further problems in scientific reasoning and conceptual development. Detailing these criticisms and explaining how science treats these issues are meant to clarify method and theory, and to improve exchanges between the science and philosophy of perception.
Affiliation(s)
- Johannes Burge
- Department of Psychology, University of Pennsylvania
- Neuroscience Graduate Group, University of Pennsylvania
- Bioengineering Graduate Group, University of Pennsylvania
- Tyler Burge
- Department of Philosophy, University of California, Los Angeles
7
Linton P, Morgan MJ, Read JCA, Vishwanath D, Creem-Regehr SH, Domini F. New Approaches to 3D Vision. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210443. PMID: 36511413. PMCID: PMC9745878. DOI: 10.1098/rstb.2021.0443.
Abstract
New approaches to 3D vision are enabling new advances in artificial intelligence and autonomous vehicles, a better understanding of how animals navigate the 3D world, and new insights into human perception in virtual and augmented reality. Whilst traditional approaches to 3D vision in computer vision (SLAM: simultaneous localization and mapping), animal navigation (cognitive maps), and human vision (optimal cue integration) start from the assumption that the aim of 3D vision is to provide an accurate 3D model of the world, the new approaches to 3D vision explored in this issue challenge this assumption. Instead, they investigate the possibility that computer vision, animal navigation, and human vision can rely on partial or distorted models or no model at all. This issue also highlights the implications for artificial intelligence, autonomous vehicles, human perception in virtual and augmented reality, and the treatment of visual disorders, all of which are explored by individual articles. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Paul Linton
- Presidential Scholars in Society and Neuroscience, Center for Science and Society, Columbia University, New York, NY 10027, USA
- Italian Academy for Advanced Studies in America, Columbia University, New York, NY 10027, USA
- Visual Inference Lab, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Michael J. Morgan
- Department of Optometry and Visual Sciences, City, University of London, Northampton Square, London EC1V 0HB, UK
- Jenny C. A. Read
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, Tyne & Wear NE2 4HH, UK
- Dhanraj Vishwanath
- School of Psychology and Neuroscience, University of St Andrews, St Andrews, Fife KY16 9JP, UK
- Fulvio Domini
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI 02912-9067, USA
8
Linton P. Minimal theory of 3D vision: new approach to visual scale and visual shape. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210455. PMID: 36511406. PMCID: PMC9745885. DOI: 10.1098/rstb.2021.0455.
Abstract
Since Kepler and Descartes in the early 1600s, vision science has been committed to a triangulation model of stereo vision. But in the early 1800s, we realized that disparities are responsible for stereo vision. And we have spent the past 200 years trying to shoe-horn disparities back into the triangulation account. The first part of this article argues that this is a mistake, and that stereo vision is a solution to a different problem: the eradication of rivalry between the two retinal images, rather than the triangulation of objects in space. This leads to a 'minimal theory of 3D vision', where 3D vision is no longer tied to estimating the scale, shape, and direction of objects in the world. The second part of this article then asks whether the other aspects of 3D vision, which go beyond stereo vision, really operate at the same level of visual experience as stereo vision? I argue they do not. Whilst we want a theory of real-world 3D vision, the literature risks giving us a theory of picture perception instead. And I argue for a two-stage theory, where our purely internal 'minimal' 3D percept (from stereo vision) is linked to the world through cognition. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Paul Linton
- Presidential Scholars in Society and Neuroscience, Center for Science and Society, Columbia University, New York, NY 10027, USA
- Italian Academy for Advanced Studies in America, Columbia University, New York, NY 10027, USA
- Visual Inference Lab, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
9
Visual cognition: A new perspective on mental rotation. Curr Biol 2022; 32:R1281-R1283. PMID: 36413974. DOI: 10.1016/j.cub.2022.10.012.
Abstract
Manipulating an object in one's mind has long been thought to mirror physically manipulating that object in allocentric three-dimensional space. A new study revises and clarifies this foundational assumption, identifying a previously unknown role for the observer's point-of-view.
10
Stewart EEM, Hartmann FT, Morgenstern Y, Storrs KR, Maiello G, Fleming RW. Mental object rotation based on two-dimensional visual representations. Curr Biol 2022; 32:R1224-R1225. PMID: 36347228. DOI: 10.1016/j.cub.2022.09.036.
Abstract
The discovery of mental rotation was one of the most significant landmarks in experimental psychology, leading to the ongoing assumption that to visually compare objects from different three-dimensional viewpoints, we use explicit internal simulations of object rotations, to 'mentally adjust' one object until it matches the other. These rotations are thought to be performed on three-dimensional representations of the object, by literal analogy to physical rotations. In particular, it is thought that an imagined object is continuously adjusted at a constant three-dimensional angular rotation rate from its initial orientation to the final orientation through all intervening viewpoints. While qualitative theories have tried to account for this phenomenon, to date there has been no explicit, image-computable model of the underlying processes. As a result, there is no quantitative account of why some object viewpoints appear more similar to one another than others when the three-dimensional angular difference between them is the same. We reasoned that the specific pattern of non-uniformities in the perception of viewpoints can reveal the visual computations underlying mental rotation. We therefore compared human viewpoint perception with a model based on the kind of two-dimensional 'optical flow' computations that are thought to underlie motion perception in biological vision, finding that the model reproduces the specific errors that participants make. This suggests that mental rotation involves simulating the two-dimensional retinal image change that would occur when rotating objects. When we compare objects, we do not do so in a distal three-dimensional representation as previously assumed, but by measuring how much the proximal stimulus would change if we watched the object rotate, capturing perspectival appearance changes.
Affiliation(s)
- Emma E M Stewart
- Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Strasse 10 F, D-35394 Giessen, Germany
- Frieder T Hartmann
- Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Strasse 10 F, D-35394 Giessen, Germany
- Yaniv Morgenstern
- University of Leuven (KU Leuven), Tiensestraat 102 - box 3711, 3000 Leuven, Belgium
- Katherine R Storrs
- Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Strasse 10 F, D-35394 Giessen, Germany; Centre for Mind, Brain and Behaviour (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
- Guido Maiello
- Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Strasse 10 F, D-35394 Giessen, Germany
- Roland W Fleming
- Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Strasse 10 F, D-35394 Giessen, Germany; Centre for Mind, Brain and Behaviour (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
11
Berger SE, Baria AT. Assessing Pain Research: A Narrative Review of Emerging Pain Methods, Their Technosocial Implications, and Opportunities for Multidisciplinary Approaches. Front Pain Res 2022; 3:896276. PMID: 35721658. PMCID: PMC9201034. DOI: 10.3389/fpain.2022.896276.
Abstract
Pain research traverses many disciplines and methodologies. Yet, despite our understanding and field-wide acceptance of the multifactorial essence of pain as a sensory perception, emotional experience, and biopsychosocial condition, pain scientists and practitioners often remain siloed within their domain expertise and associated techniques. The context in which the field finds itself today-with increasing reliance on digital technologies, an on-going pandemic, and continued disparities in pain care-requires new collaborations and different approaches to measuring pain. Here, we review the state-of-the-art in human pain research, summarizing emerging practices and cutting-edge techniques across multiple methods and technologies. For each, we outline foreseeable technosocial considerations, reflecting on implications for standards of care, pain management, research, and societal impact. Through overviewing alternative data sources and varied ways of measuring pain and by reflecting on the concerns, limitations, and challenges facing the field, we hope to create critical dialogues, inspire more collaborations, and foster new ideas for future pain research methods.
Affiliation(s)
- Sara E. Berger
- Responsible and Inclusive Technologies Research, Exploratory Sciences Division, IBM Thomas J. Watson Research Center, Yorktown Heights, NY, United States
12
Gayet S, Peelen MV. Preparatory attention incorporates contextual expectations. Curr Biol 2021; 32:687-692.e6. PMID: 34919809. DOI: 10.1016/j.cub.2021.11.062.
Abstract
Humans are remarkably proficient at finding objects within complex visual scenes. According to current theories of attention, visual processing of an object of interest is favored through the preparatory activation of object-specific representations in visual cortex. One key problem that is inherent to real-world visual search but is not accounted for by current theories is that a given object will produce a dramatically different retinal image depending on its location, which is unknown in advance. For instance, the color of the retinal image depends on the illumination on the object, its shape depends on the viewpoint, and (most critically) its size can vary by several orders of magnitude, depending on the distance to the observer. In order to benefit search, preparatory activity thus needs to incorporate contextual expectations. In the current study, we measured fMRI blood-oxygen-level-dependent (BOLD) activity in human observers while they prepared to search for objects at different distances in indoor-scene photographs. First, we established that observers instantiated preparatory object representations: activity patterns in object-selective cortex evoked during search preparation (while no objects were presented) resembled activity patterns evoked by viewing those objects in isolation. Second, we demonstrated that these preparatory object representations were systematically modulated by expectations derived from scene context: activity patterns reflected the predicted retinal image of the object at each distance (i.e., distant search evoking smaller object representations and nearby search evoking larger object representations). These findings reconcile current theories of attentional selection with the challenges of real-world vision.
Affiliation(s)
- Surya Gayet
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 GD Nijmegen, the Netherlands; Helmholtz Institute, Experimental Psychology, Utrecht University, 3584 CS Utrecht, the Netherlands
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 GD Nijmegen, the Netherlands
13
Linton P. V1 as an egocentric cognitive map. Neurosci Conscious 2021; 2021:niab017. PMID: 34532068. PMCID: PMC8439394. DOI: 10.1093/nc/niab017.
Abstract
We typically distinguish between V1 as an egocentric perceptual map and the hippocampus as an allocentric cognitive map. In this article, we argue that V1 also functions as a post-perceptual egocentric cognitive map. We argue that three well-documented functions of V1, namely (i) the estimation of distance, (ii) the estimation of size, and (iii) multisensory integration, are better understood as post-perceptual cognitive inferences. This argument has two important implications. First, we argue that V1 must function as the neural correlates of the visual perception/cognition distinction and suggest how this can be accommodated by V1's laminar structure. Second, we use this insight to propose a low-level account of visual consciousness in contrast to mid-level accounts (recurrent processing theory; integrated information theory) and higher-level accounts (higher-order thought; global workspace theory). Detection thresholds have been traditionally used to rule out such an approach, but we explain why it is a mistake to equate visibility (and therefore the presence/absence of visual experience) with detection thresholds.
Affiliation(s)
- Paul Linton
- Centre for Applied Vision Research, City, University of London, Northampton Square, London EC1V 0HB, UK
15
Conflicting shape percepts explained by perception cognition distinction. Proc Natl Acad Sci U S A 2021; 118:2024195118. PMID: 33622670. DOI: 10.1073/pnas.2024195118.
16
Daoust L. Stability by degrees: conceptions of constancy from the history of perceptual psychology. Hist Philos Life Sci 2021; 43:17. PMID: 33564953. DOI: 10.1007/s40656-021-00370-1.
Abstract
Do the physical facts of the viewed environment account for the ordinary experiences we have of that environment? According to standard philosophical views, distal facts do account for our experiences, a phenomenon explained by appeal to perceptual constancy, the phenomenal stability of objects and environmental properties notwithstanding physical changes in proximal stimulation. This essay reviews a significant but neglected research tradition in experimental psychology according to which percepts systematically do not correspond to mind-independent distal facts. Instead, stability of percept values comes in degrees, and physical facts about the viewed environment alone do not account for our ordinary experiences of the world. I conclude that more attention to descriptive research in psychophysics is warranted if what is sought is a philosophical theory of the nature of our perceptual relation with the world.
17
Abstract
Arguably the most foundational principle in perception research is that our experience of the world goes beyond the retinal image; we perceive the distal environment itself, not the proximal stimulation it causes. Shape may be the paradigm case of such "unconscious inference": When a coin is rotated in depth, we infer the circular object it truly is, discarding the perspectival ellipse projected on our eyes. But is this really the fate of such perspectival shapes? Or does a tilted coin retain an elliptical appearance even when we know it's circular? This question has generated heated debate from Locke and Hume to the present; but whereas extant arguments rely primarily on introspection, this problem is also open to empirical test. If tilted coins bear a representational similarity to elliptical objects, then a circular coin should, when rotated, impair search for a distal ellipse. Here, nine experiments demonstrate that this is so, suggesting that perspectival shapes persist in the mind far longer than traditionally assumed. Subjects saw search arrays of three-dimensional "coins," and simply had to locate a distally elliptical coin. Surprisingly, rotated circular coins slowed search for elliptical targets, even when subjects clearly knew the rotated coins were circular. This pattern arose with static and dynamic cues, couldn't be explained by strategic responding or unfamiliarity, generalized across shape classes, and occurred even with sustained viewing. Finally, these effects extended beyond artificial displays to real-world objects viewed in naturalistic, full-cue conditions. We conclude that objects have a remarkably persistent dual character: their objective shape "out there," and their perspectival shape "from here."
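The "perspectival ellipse" in this abstract has a simple geometric core. A minimal sketch under an orthographic-projection assumption (the study's displays used 3D rendering, so this is only an approximation):

```python
import math

def projected_aspect_ratio(theta_rad):
    """Minor/major axis ratio of a circular disk tilted in depth by theta,
    under orthographic projection: the minor axis shrinks by cos(theta)
    while the major axis keeps the disk's full diameter."""
    return math.cos(theta_rad)

# A coin tilted 60 degrees projects an ellipse half as tall as it is wide,
# so on this account it should compete with a distally elliptical coin of
# that same aspect ratio during search.
ratio = projected_aspect_ratio(math.radians(60))
print(ratio)
```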