1. Zeki S, Hale ZF, Beyh A, Rasche SE. Perceptual axioms are irreconcilable with Euclidean geometry. Eur J Neurosci 2024. [PMID: 38803020] [DOI: 10.1111/ejn.16430]
Abstract
There are different definitions of axioms, but the one that seems to have general approval is that axioms are statements whose truths are universally accepted but cannot be proven; they are the foundation from which further propositional truths are derived. Previous attempts, led by David Hilbert, to show that all of mathematics can be built into an axiomatic system that is complete and consistent failed when Kurt Gödel proved that there will always be statements which are known to be true but can never be proven within the same axiomatic system. But Gödel and his followers took no account of brain mechanisms that generate and mediate logic. In this largely theoretical paper, but backed by previous experiments and our new ones reported below, we show that in the case of so-called 'optical illusions', there exists a significant and irreconcilable difference between their visual perception and their description according to Euclidean geometry; when participants are asked to adjust, from an initial randomised state, the perceptual geometric axioms to conform to the Euclidean description, the two never match, although the degree of mismatch varies between individuals. These results provide evidence that perceptual axioms, or statements known to be perceptually true, cannot be described mathematically. Thus, the logic of the visual perceptual system is irreconcilable with the cognitive (mathematical) system and cannot be updated even when knowledge of the difference between the two is available. Hence, no one brain reality is more 'objective' than any other.
Affiliation(s)
- Semir Zeki
- Laboratory of Neurobiology, University College London, London, UK
- Zachary F Hale
- Laboratory of Neurobiology, University College London, London, UK
- Ahmad Beyh
- Laboratory of Neurobiology, University College London, London, UK
- Samuel E Rasche
- Laboratory of Neurobiology, University College London, London, UK
2. Xu H, Zhou J, Shen M. Hierarchical Constraints on the Distribution of Attention in Dynamic Displays. Behav Sci (Basel) 2024; 14:401. [PMID: 38785892] [PMCID: PMC11117499] [DOI: 10.3390/bs14050401]
Abstract
Human vision is remarkably good at recovering the latent hierarchical structure of dynamic scenes. Here, we explore how visual attention operates with this hierarchical motion representation. The way in which attention responds to surface physical features has been extensively explored. However, we know little about how the distribution of attention can be distorted by the latent hierarchical structure. To explore this topic, we conducted two experiments to investigate the relationship between minimal graph distance (MGD), one key factor in hierarchical representation, and attentional distribution. In Experiment 1, we constructed three hierarchical structures consisting of two moving objects with different MGDs. In Experiment 2, we generated three moving objects from one hierarchy to eliminate the influence of different structures. Attention was probed by the classic congruent-incongruent cueing paradigm. Our results show that the cueing effect is significantly smaller when the MGD between two objects is shorter, which suggests that attention is not evenly distributed across multiple moving objects but distorted by their latent hierarchical structure. As neither the latent structure nor the graph distance was part of the explicit task, our results also imply that both the construction of hierarchical representation and the attention to that representation are spontaneous and automatic.
Affiliation(s)
- Haokui Xu
- Department of Psychology and Behavior Sciences, Zhejiang University, Hangzhou 310023, China
- Mowei Shen
- Department of Psychology and Behavior Sciences, Zhejiang University, Hangzhou 310023, China
3. Shivkumar S, DeAngelis GC, Haefner RM. Hierarchical motion perception as causal inference. bioRxiv 2023:2023.11.18.567582. [PMID: 38014023] [PMCID: PMC10680834] [DOI: 10.1101/2023.11.18.567582]
Abstract
Since motion can only be defined relative to a reference frame, which reference frame guides perception? A century of psychophysical studies has produced conflicting evidence: retinotopic, egocentric, world-centric, or even object-centric. We introduce a hierarchical Bayesian model mapping retinal velocities to perceived velocities. Our model mirrors the structure of the world, in which visual elements move within causally connected reference frames. Friction renders velocities in these reference frames mostly stationary, formalized by an additional delta component (at zero) in the prior. Inverting this model automatically segments visual inputs into groups, groups into supergroups, etc., and "perceives" motion in the appropriate reference frame. Critical model predictions are supported by two new experiments, and fitting our model to the data allows us to infer the subjective set of reference frames used by individual observers. Our model provides a quantitative normative justification for key Gestalt principles, providing inspiration for building better models of visual processing in general.
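The delta-at-zero prior component described in this abstract can be illustrated with a minimal one-dimensional sketch (a generic illustration, not the paper's actual model): the prior on a velocity mixes a point mass at zero (stationarity, reflecting friction) with a broad Gaussian "slab", and Bayes' rule yields the posterior probability that the element is stationary given a noisy retinal velocity. All parameter values and names below are hypothetical.

```python
import math

def gauss(x, var):
    """Density at x of a zero-mean Gaussian with variance var."""
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def p_stationary(v_obs, sigma2=1.0, tau2=25.0, p0=0.5):
    """Posterior probability that the true velocity is exactly zero.

    Prior on velocity: p0 * delta(0) + (1 - p0) * N(0, tau2)  (the slab).
    Likelihood: v_obs ~ N(v_true, sigma2).
    Marginalizing over v_true gives one Gaussian evidence per component,
    which we renormalize.
    """
    ev_stationary = p0 * gauss(v_obs, sigma2)              # delta component
    ev_moving = (1 - p0) * gauss(v_obs, sigma2 + tau2)     # slab component
    return ev_stationary / (ev_stationary + ev_moving)
```

With these illustrative parameters, a small observed velocity is attributed to sensory noise around a stationary element, while a large one is confidently attributed to real motion.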
Affiliation(s)
- Sabyasachi Shivkumar
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, NY 10027, USA
- Gregory C DeAngelis
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Ralf M Haefner
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
4. Noel JP, Bill J, Ding H, Vastola J, DeAngelis GC, Angelaki DE, Drugowitsch J. Causal inference during closed-loop navigation: parsing of self- and object-motion. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220344. [PMID: 37545300] [PMCID: PMC10404925] [DOI: 10.1098/rstb.2022.0344]
Abstract
A key computation in building adaptive internal models of the external world is to ascribe sensory signals to their likely cause(s), a process of causal inference (CI). CI is well studied within the framework of two-alternative forced-choice tasks, but less well understood within the context of naturalistic action-perception loops. Here, we examine the process of disambiguating retinal motion caused by self- and/or object-motion during closed-loop navigation. First, we derive a normative account specifying how observers ought to intercept hidden and moving targets given their belief about (i) whether retinal motion was caused by the target moving, and (ii) if so, with what velocity. Next, in line with the modelling results, we show that humans report targets as stationary and steer towards their initial rather than final position more often when they are themselves moving, suggesting a putative misattribution of object-motion to the self. Further, we predict that observers should misattribute retinal motion more often: (i) during passive rather than active self-motion (given the lack of an efference copy informing self-motion estimates in the former), and (ii) when targets are presented eccentrically rather than centrally (given that lateral self-motion flow vectors are larger at eccentric locations during forward self-motion). Results support both of these predictions. Lastly, analysis of eye movements shows that, while initial saccades toward targets were largely accurate regardless of the self-motion condition, subsequent gaze pursuit was modulated by target velocity during object-only motion, but not during concurrent object- and self-motion. These results demonstrate CI within action-perception loops, and suggest a protracted temporal unfolding of the computations characterizing CI. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, NY 10003, USA
- Johannes Bill
- Department of Neurobiology, Harvard University, Boston, MA 02115, USA
- Department of Psychology, Harvard University, Boston, MA 02115, USA
- Haoran Ding
- Center for Neural Science, New York University, New York, NY 10003, USA
- John Vastola
- Department of Neurobiology, Harvard University, Boston, MA 02115, USA
- Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14611, USA
- Dora E. Angelaki
- Center for Neural Science, New York University, New York, NY 10003, USA
- Tandon School of Engineering, New York University, New York, NY 10003, USA
- Jan Drugowitsch
- Department of Neurobiology, Harvard University, Boston, MA 02115, USA
- Center for Brain Science, Harvard University, Boston, MA 02115, USA
5. A General Framework for Inferring Bayesian Ideal Observer Models from Psychophysical Data. eNeuro 2023; 10:ENEURO.0144-22.2022. [PMID: 36316119] [PMCID: PMC9833051] [DOI: 10.1523/eneuro.0144-22.2022]
Abstract
A central question in neuroscience is how sensory inputs are transformed into percepts. At this point, it is clear that this process is strongly influenced by prior knowledge of the sensory environment. Bayesian ideal observer models provide a useful link between data and theory that can help researchers evaluate how prior knowledge is represented and integrated with incoming sensory information. However, the statistical prior employed by a Bayesian observer cannot be measured directly, and must instead be inferred from behavioral measurements. Here, we review the general problem of inferring priors from psychophysical data, and the simple solution that follows from assuming a prior that is a Gaussian probability distribution. As our understanding of sensory processing advances, however, there is an increasing need for methods to flexibly recover the shape of Bayesian priors that are not well approximated by elementary functions. To address this issue, we describe a novel approach that applies to arbitrary prior shapes, which we parameterize using mixtures of Gaussian distributions. After incorporating a simple approximation, this method produces an analytical solution for psychophysical quantities that can be numerically optimized to recover the shapes of Bayesian priors. This approach offers advantages in flexibility, while still providing an analytical framework for many scenarios. We provide a MATLAB toolbox implementing key computations described herein.
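The mixture-of-Gaussians parameterization lends itself to a closed-form posterior when the likelihood is Gaussian, since each prior component is conjugate. The sketch below (a generic illustration, not the toolbox's MATLAB implementation) computes the posterior mean of a stimulus variable under such a prior; all names and parameter values are hypothetical.

```python
import math

def gauss(x, mu, var):
    """Density at x of N(mu, var)."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def posterior_mean(x, sigma2, weights, mus, taus2):
    """Posterior mean of theta given x ~ N(theta, sigma2) and a
    mixture-of-Gaussians prior sum_k w_k N(mu_k, tau_k^2).

    Each component is conjugate, so the posterior is again a mixture:
    component k gets responsibility proportional to
    w_k * N(x; mu_k, sigma2 + tau_k^2), and its posterior mean is the
    precision-weighted average of mu_k and x.
    """
    resp, means = [], []
    for w, mu, t2 in zip(weights, mus, taus2):
        resp.append(w * gauss(x, mu, sigma2 + t2))
        means.append((mu / t2 + x / sigma2) / (1 / t2 + 1 / sigma2))
    z = sum(resp)
    return sum(r * m for r, m in zip(resp, means)) / z
```

Because each component updates analytically, flexible prior shapes can be accommodated by adding components rather than by numerical integration over the prior.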
6. Bill J, Gershman SJ, Drugowitsch J. Visual motion perception as online hierarchical inference. Nat Commun 2022; 13:7403. [PMID: 36456546] [PMCID: PMC9715570] [DOI: 10.1038/s41467-022-34805-5]
Abstract
Identifying the structure of motion relations in the environment is critical for navigation, tracking, prediction, and pursuit. Yet, little is known about the mental and neural computations that allow the visual system to infer this structure online from a volatile stream of visual information. We propose online hierarchical Bayesian inference as a principled solution for how the brain might solve this complex perceptual task. We derive an online Expectation-Maximization algorithm that explains human percepts qualitatively and quantitatively for a diverse set of stimuli, covering classical psychophysics experiments, ambiguous motion scenes, and illusory motion displays. We thereby identify normative explanations for the origin of human motion structure perception and make testable predictions for future psychophysics experiments. The proposed online hierarchical inference model furthermore affords a neural network implementation which shares properties with motion-sensitive cortical areas and motivates targeted experiments to reveal the neural representations of latent structure.
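As a loose caricature of online inference over motion structure (not the paper's online Expectation-Maximization algorithm), one can decompose a stream of object velocities into a shared "group" component plus per-object residuals using leaky integration. The class below is purely illustrative and all names and constants are hypothetical.

```python
class OnlineMotionDecomposer:
    """Toy online decomposition of object velocities into a shared
    group component plus per-object residuals, using exponential
    forgetting (a drastic simplification of hierarchical online EM)."""

    def __init__(self, n_objects, forget=0.9):
        self.forget = forget
        self.group = 0.0                      # shared motion estimate
        self.individual = [0.0] * n_objects   # residual motion estimates

    def step(self, velocities):
        # "E-step"-like: residuals relative to the current shared estimate.
        resid = [v - self.group for v in velocities]
        # "M-step"-like: leaky integration of shared and individual motion.
        mean_v = sum(velocities) / len(velocities)
        self.group = self.forget * self.group + (1 - self.forget) * mean_v
        self.individual = [
            self.forget * i + (1 - self.forget) * r
            for i, r in zip(self.individual, resid)
        ]
        return self.group, self.individual
```

Fed a stream where all objects share one velocity, the shared estimate converges to that velocity and the residuals decay toward zero; the full model additionally infers which grouping structure generated the stream.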
Affiliation(s)
- Johannes Bill
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Samuel J Gershman
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Center for Brains, Minds, and Machines, MIT, Cambridge, MA, USA
- Jan Drugowitsch
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
7.
Abstract
Vision and learning have long been considered to be two areas of research linked only distantly. However, recent developments in vision research have changed the conceptual definition of vision from a signal-evaluating process to a goal-oriented interpreting process, and this shift binds learning, together with the resulting internal representations, intimately to vision. In this review, we consider various types of learning (perceptual, statistical, and rule/abstract) associated with vision in the past decades and argue that they represent differently specialized versions of the fundamental learning process, which must be captured in its entirety when applied to complex visual processes. We show why the generalized version of statistical learning can provide the appropriate setup for such a unified treatment of learning in vision, what computational framework best accommodates this kind of statistical learning, and what plausible neural scheme could feasibly implement this framework. Finally, we list the challenges that the field of statistical learning faces in fulfilling the promise of being the right vehicle for advancing our understanding of vision in its entirety. Expected final online publication date for the Annual Review of Vision Science, Volume 8 is September 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Affiliation(s)
- József Fiser
- Department of Cognitive Science, Center for Cognitive Computation, Central European University, Vienna 1100, Austria
- Gábor Lengyel
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York 14627, USA
8. Saini H, Jordan H, Fallah M. Color Modulates Feature Integration. Front Psychol 2021; 12:680558. [PMID: 34177733] [PMCID: PMC8226161] [DOI: 10.3389/fpsyg.2021.680558]
Abstract
Bayesian models of object recognition propose the resolution of ambiguity through probabilistic integration of prior experience with available sensory information. Color, even when task-irrelevant, has been shown to modulate high-level cognitive control tasks. However, it remains unclear how color modulations affect lower-level perceptual processing. We investigated whether color affects feature integration using the flash-jump illusion. This illusion occurs when an apparent motion stimulus, a rectangular bar appearing at different locations along a motion trajectory, changes color at a single position. Observers misperceive this color change as occurring farther along the trajectory of motion. This mislocalization error is proposed to be produced by a Bayesian perceptual framework dependent on responses in area V4. Our results demonstrated that the color of the flash modulated the magnitude of the flash-jump illusion such that participants reported less of a shift, i.e., a more veridical flash location, for both red and blue flashes, as compared to green and yellow. Our findings extend color-dependent modulation effects found in higher-order executive functions into lower-level Bayesian perceptual processes. Our results also support the theory that feature integration is a Bayesian process. In this framework, color modulations play an inherent and automatic role as different colors have different weights in Bayesian perceptual processing.
Affiliation(s)
- Harpreet Saini
- Department of Biology, York University, Toronto, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Application (VISTA), York University, Toronto, ON, Canada
- Heather Jordan
- Centre for Vision Research, York University, Toronto, ON, Canada
- School of Kinesiology and Health Science, York University, Toronto, ON, Canada
- Mazyar Fallah
- Department of Biology, York University, Toronto, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Application (VISTA), York University, Toronto, ON, Canada
- School of Kinesiology and Health Science, York University, Toronto, ON, Canada
- Department of Human Health and Nutritional Sciences, College of Biological Science, University of Guelph, Guelph, ON, Canada