1. Wang X, Song Y, Liao M, Liu T, Liu L, Reynaud A. Corrective mechanisms of motion extrapolation. J Vis 2024; 24:6. [PMID: 38512248; PMCID: PMC10960225; DOI: 10.1167/jov.24.3.6]
Abstract
Transmission and processing of sensory information in the visual system take time. For motion perception, our brain can overcome this intrinsic neural delay through extrapolation mechanisms and accurately predict the current position of a continuously moving object. But how does the system behave when the motion abruptly changes and the prediction becomes wrong? Here we address this question by studying human observers' perceived position of a moving object under various abrupt motion changes. We developed a task in which a bar moves steadily in the horizontal direction and then suddenly stops, reverses, or disappears then reverses around two vertical stationary reference lines. Our results showed that participants overestimated the position of the stopping bar but did not perceive an overshoot in the motion-reversal condition. When a temporal gap was added at the reversal point, the perceptual overshoot of the end point scaled with the gap duration. Our model suggests that the overestimation of the object's position when it disappears is not a linear function of its speed but gradually fades out. These results can thus be reconciled within a single process in which cortical motion-prediction mechanisms interact with late transient sensory visual inputs.
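The fading-extrapolation idea summarised above can be sketched as a toy model (an illustrative sketch only, not the authors' actual model; the function name, the decay constant `tau`, and all numbers are assumptions):

```python
import math

def perceived_offset(speed, gap, tau=0.08):
    """Toy model: after the object disappears, the extrapolated position
    keeps advancing at the object's speed, but the extrapolation fades
    exponentially with time constant tau (seconds).  The perceived
    overshoot therefore grows with gap duration but saturates, rather
    than growing linearly as speed * gap."""
    # Integral of speed * exp(-t / tau) from 0 to gap
    return speed * tau * (1.0 - math.exp(-gap / tau))

# Overshoot scales with gap duration, approaching speed * tau
for gap in (0.02, 0.05, 0.1, 0.2):
    print(f"gap={gap:.2f}s  overshoot={perceived_offset(10.0, gap):.3f} deg")
```

Under these assumptions, overshoot increases with gap duration but stays below the purely linear prediction `speed * gap`, consistent with an extrapolation signal that gradually fades out.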
Affiliation(s)
- Xi Wang
- Department of Ophthalmology, and Laboratory of Optometry and Vision Sciences, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- McGill Vision Research Unit, Department of Ophthalmology & Visual Sciences, McGill University, Montreal, Quebec, Canada
- Yutong Song
- Department of Ophthalmology, and Laboratory of Optometry and Vision Sciences, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Meng Liao
- Department of Ophthalmology, and Laboratory of Optometry and Vision Sciences, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Tong Liu
- Department of Ophthalmology, and Laboratory of Optometry and Vision Sciences, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Longqian Liu
- Department of Ophthalmology, and Laboratory of Optometry and Vision Sciences, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Alexandre Reynaud
- McGill Vision Research Unit, Department of Ophthalmology & Visual Sciences, McGill University, Montreal, Quebec, Canada
2. Jiang LP, Rao RPN. Dynamic predictive coding: A model of hierarchical sequence learning and prediction in the neocortex. PLoS Comput Biol 2024; 20:e1011801. [PMID: 38330098; PMCID: PMC10880975; DOI: 10.1371/journal.pcbi.1011801]
Abstract
We introduce dynamic predictive coding, a hierarchical model of spatiotemporal prediction and sequence learning in the neocortex. The model assumes that higher cortical levels modulate the temporal dynamics of lower levels, correcting their predictions of dynamics using prediction errors. As a result, lower levels form representations that encode sequences at shorter timescales (e.g., a single step) while higher levels form representations that encode sequences at longer timescales (e.g., an entire sequence). We tested this model using a two-level neural network, where the top-down modulation creates low-dimensional combinations of a set of learned temporal dynamics to explain input sequences. When trained on natural videos, the lower-level model neurons developed space-time receptive fields similar to those of simple cells in the primary visual cortex while the higher-level responses spanned longer timescales, mimicking temporal response hierarchies in the cortex. Additionally, the network's hierarchical sequence representation exhibited both predictive and postdictive effects resembling those observed in visual motion processing in humans (e.g., in the flash-lag illusion). When coupled with an associative memory emulating the role of the hippocampus, the model allowed episodic memories to be stored and retrieved, supporting cue-triggered recall of an input sequence similar to activity recall in the visual cortex. When extended to three hierarchical levels, the model learned progressively more abstract temporal representations along the hierarchy. Taken together, our results suggest that cortical processing and learning of sequences can be interpreted as dynamic predictive coding based on a hierarchical spatiotemporal generative model of the visual world.
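The core top-down modulation described above can be sketched in a few lines (a minimal illustrative sketch under assumed shapes; the dimensions, matrices, and names are assumptions, not the paper's trained parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-level setup.
N, K = 8, 3                                  # lower-level units, learned dynamics
V = rng.standard_normal((K, N, N)) * 0.3     # library of transition matrices
r = rng.standard_normal(N)                   # lower-level state (short timescale)
w = np.array([0.7, 0.2, 0.1])                # higher-level state: low-dimensional
                                             # mixture weights over the dynamics

def predict_next(r, w, V):
    """Higher level modulates lower-level temporal dynamics: the effective
    transition is a weighted combination of learned transition matrices,
    and the lower level predicts one step ahead."""
    A = np.tensordot(w, V, axes=1)           # effective transition matrix
    return A @ r

r_pred = predict_next(r, w, V)
# A prediction error at the lower level would then be fed back to correct
# the higher-level weights:  err = r_observed - r_pred
print(r_pred.shape)
```

Because `w` changes slowly relative to `r`, the higher level ends up representing sequences at a longer timescale, which is the paper's central hierarchy claim.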
Affiliation(s)
- Linxing Preston Jiang
- Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, Washington, United States of America
- Center for Neurotechnology, University of Washington, Seattle, Washington, United States of America
- Computational Neuroscience Center, University of Washington, Seattle, Washington, United States of America
- Rajesh P. N. Rao
- Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, Washington, United States of America
- Center for Neurotechnology, University of Washington, Seattle, Washington, United States of America
- Computational Neuroscience Center, University of Washington, Seattle, Washington, United States of America
3. Turner W, Sexton C, Hogendoorn H. Neural mechanisms of visual motion extrapolation. Neurosci Biobehav Rev 2024; 156:105484. [PMID: 38036162; DOI: 10.1016/j.neubiorev.2023.105484]
Abstract
Because neural processing takes time, the brain only has delayed access to sensory information. When localising moving objects this is problematic, as an object will have moved on by the time its position has been determined. Here, we consider predictive motion extrapolation as a fundamental delay-compensation strategy. From a population-coding perspective, we outline how extrapolation can be achieved by a forwards shift in the population-level activity distribution. We identify general mechanisms underlying such shifts, involving various asymmetries which facilitate the targeted 'enhancement' and/or 'dampening' of population-level activity. We classify these on the basis of their potential implementation (intra- vs inter-regional processes) and consider specific examples in different visual regions. We consider how motion extrapolation can be achieved during inter-regional signaling, and how asymmetric connectivity patterns which support extrapolation can emerge spontaneously from local synaptic learning rules. Finally, we consider how more abstract 'model-based' predictive strategies might be implemented. Overall, we present an integrative framework for understanding how the brain determines the real-time position of moving objects, despite neural delays.
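The forwards shift of the population-level activity distribution described above can be illustrated with a minimal sketch (all parameters and names are illustrative assumptions, not a model from the review):

```python
import numpy as np

positions = np.linspace(-10, 10, 201)            # preferred positions (deg)

def population_activity(true_pos, sigma=1.0):
    """Gaussian activity bump centred on the stimulus position."""
    return np.exp(-(positions - true_pos) ** 2 / (2 * sigma ** 2))

def extrapolate(activity, speed, delay, dx=0.1):
    """Compensate a neural delay by shifting the bump forward along the
    motion direction by speed * delay (here idealised as np.roll, i.e.
    a targeted enhancement ahead of the bump and dampening behind it)."""
    shift_bins = int(round(speed * delay / dx))
    return np.roll(activity, shift_bins)

delayed = population_activity(true_pos=0.0)                 # where the object *was*
compensated = extrapolate(delayed, speed=20.0, delay=0.1)   # 100 ms delay

decoded = positions[np.argmax(compensated)]
print(decoded)   # ≈ speed * delay = 2.0 deg ahead of the delayed position
```

The asymmetric enhancement/dampening mechanisms the review classifies are, in effect, biologically plausible ways of implementing this shift without an explicit `np.roll`.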
Affiliation(s)
- William Turner
- Queensland University of Technology, Brisbane 4059, Australia; The University of Melbourne, Melbourne 3010, Australia.
- Hinze Hogendoorn
- Queensland University of Technology, Brisbane 4059, Australia; The University of Melbourne, Melbourne 3010, Australia
4. Grimaldi A, Perrinet LU. Learning heterogeneous delays in a layer of spiking neurons for fast motion detection. Biol Cybern 2023; 117:373-387. [PMID: 37695359; DOI: 10.1007/s00422-023-00975-8]
Abstract
The precise timing of spikes emitted by neurons plays a crucial role in shaping the response of efferent biological neurons. This temporal dimension of neural activity holds significant importance in understanding information processing in neurobiology, especially for the performance of neuromorphic hardware, such as event-based cameras. Nonetheless, many artificial neural models disregard this critical temporal dimension of neural activity. In this study, we present a model designed to efficiently detect temporal spiking motifs using a layer of spiking neurons equipped with heterogeneous synaptic delays. Our model capitalizes on the diverse synaptic delays present on the dendritic tree, enabling specific arrangements of temporally precise synaptic inputs to synchronize upon reaching the basal dendritic tree. We formalize this process as a time-invariant logistic regression, which can be trained using labeled data. To demonstrate its practical efficacy, we apply the model to naturalistic videos transformed into event streams, simulating the output of the biological retina or event-based cameras. To evaluate the robustness of the model in detecting visual motion, we conduct experiments by selectively pruning weights and demonstrate that the model remains efficient even under significantly reduced workloads. In conclusion, by providing a comprehensive, event-driven computational building block, the incorporation of heterogeneous delays has the potential to greatly improve the performance of future spiking neural network algorithms, particularly in the context of neuromorphic chips.
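The time-invariant logistic regression over heterogeneously delayed spikes can be sketched as follows (an illustrative sketch only; the shapes, weights, and names are assumptions rather than the paper's trained model):

```python
import numpy as np

rng = np.random.default_rng(1)

N, D, T = 4, 5, 100                            # presynaptic neurons, delays, time steps
S = (rng.random((N, T)) < 0.1).astype(float)   # binary spike raster
W = rng.standard_normal((N, D)) * 0.5          # one weight per (neuron, delay) pair
b = -1.0                                       # bias

def motif_evidence(S, W, b):
    """At each time t, accumulate w[n, d] * S[n, t - d] over neurons and
    delays, then squash with a logistic function: evidence is high when a
    specific temporal arrangement of delayed inputs coincides, i.e. when
    the motif's spikes synchronise after travelling their delays."""
    N, D = W.shape
    T = S.shape[1]
    drive = np.full(T, b)
    for n in range(N):
        for d in range(D):
            drive[d:] += W[n, d] * S[n, : T - d]
    return 1.0 / (1.0 + np.exp(-drive))        # logistic output in (0, 1)

p = motif_evidence(S, W, b)
print(p.shape)
```

Because the same `W` is applied at every time step, the detector is time-invariant, and `W` can be fit to labeled event streams by ordinary logistic-regression training.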
Affiliation(s)
- Antoine Grimaldi
- Institut de Neurosciences de la Timone, Aix Marseille Univ, CNRS, 27 boulevard Jean Moulin, 13005, Marseille, France
- Laurent U Perrinet
- Institut de Neurosciences de la Timone, Aix Marseille Univ, CNRS, 27 boulevard Jean Moulin, 13005, Marseille, France
5. Qian N, Goldberg ME, Zhang M. Tuning curves vs. population responses, and perceptual consequences of receptive-field remapping. Front Comput Neurosci 2023; 16:1060757. [PMID: 36714528; PMCID: PMC9880053; DOI: 10.3389/fncom.2022.1060757]
Abstract
Sensory processing is often studied by examining how a given neuron responds to a parameterized set of stimuli (tuning curve) or how a given stimulus evokes responses from a parameterized set of neurons (population response). Although tuning curves and the corresponding population responses contain the same information, they can have different properties. These differences are known to be important because the perception of a stimulus should be decoded from its population response, not from any single tuning curve. The differences are less studied in the spatial domain, where a cell's spatial tuning curve is simply its receptive field (RF) profile. Here, we focus on evaluating the common belief that perisaccadic forward and convergent RF shifts lead to forward (translational) and convergent (compressive) perceptual mislocalization, respectively, and investigate the effects of three related factors: decoders' awareness of RF shifts, changes of cells' covering density near the attentional locus (the saccade target), and attentional response modulation. We find that RF shifts alone produce either no shift or an opposite shift of the population responses, depending on whether or not decoders are aware of the RF shifts. Thus, forward RF shifts do not predict forward mislocalization. However, convergent RF shifts change cells' covering density for aware decoders (but not for unaware decoders), which may predict convergent mislocalization. Finally, attentional modulation adds a convergent component to population responses for stimuli near the target. We simulate the combined effects of these factors and discuss the results in relation to extant mislocalization data. We speculate that perisaccadic mislocalization might be a flash-lag effect unrelated to perisaccadic RF remapping, but to resolve the issue one has to address whether or not perceptual decoders are aware of RF shifts.
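The aware/unaware decoder distinction can be made concrete with a toy population (an illustrative sketch; all parameters and names are assumptions):

```python
import numpy as np

prefs = np.linspace(-10, 10, 201)       # original preferred positions (deg)
sigma, delta, x0 = 1.0, 2.0, 0.0        # RF width, forward RF shift, stimulus

def responses(x, shift):
    """Each neuron's RF centre moves from p to p + shift, so its
    response to a stimulus at x now peaks where p + shift == x."""
    return np.exp(-((prefs + shift) - x) ** 2 / (2 * sigma ** 2))

r = responses(x0, delta)

unaware = prefs[np.argmax(r)]             # decoder keeps the original labels
aware = (prefs + delta)[np.argmax(r)]     # decoder relabels the shifted RFs

print(unaware, aware)
```

With the original labels the decoded position shifts opposite to the forward RF shift (here to -2 deg); relabelling to the shifted RF centres removes the shift entirely (0 deg), matching the abstract's conclusion that forward RF shifts alone do not predict forward mislocalization.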
Affiliation(s)
- Ning Qian
- Department of Neuroscience and Zuckerman Institute, Columbia University, New York, NY, United States
- Department of Physiology and Cellular Biophysics, Columbia University, New York, NY, United States
- Michael E. Goldberg
- Department of Neuroscience and Zuckerman Institute, Columbia University, New York, NY, United States
- Departments of Neurology, Psychiatry, and Ophthalmology, Columbia University, New York, NY, United States
- Mingsha Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
6. Jovanovic L, Trichanh M, Martin B, Giersch A. Strong perceptual consequences of low-level visual predictions: A new illusion. Cognition 2023; 230:105279. [PMID: 36088670; DOI: 10.1016/j.cognition.2022.105279]
Abstract
Predicting information is considered to be an efficient strategy to minimise processing costs by exploiting regularities in the environment, and to allow for adaptation in case of irregularities, i.e. prediction errors. How such errors impact conscious perception is unclear, especially when predictions concern elementary visual features. Here we present results from a novel experimental approach allowing us to investigate the perceptual consequences of violated low-level predictions about moving objects. Observers were presented with two squares moving towards each other with a constant speed, and reported whether they were in contact or not before they disappeared. A compelling illusion of a gap between the squares occurred when the leading edges of those squares contacted briefly. The apparent gap was larger than a physical and stable separation of 2.6 min of arc between the squares. The illusion disappeared only when the contact did not violate extrapolations of the contrast edge between the moving object and the background. The pattern of results is consistent with an early locus of the effect and cannot be explained by decisional biases, guesses, top-down, attentional or masking effects. We suggest that violations of the contrast edge extrapolation in the direction of motion have strong perceptual consequences.
Affiliation(s)
- Ljubica Jovanovic
- University of Strasbourg, INSERM U1114, 1 place de l'hôpital, 67100 Strasbourg, France; School of Psychology, University of Nottingham, NG7 2RD Nottingham, UK.
- Mélanie Trichanh
- Centre Ressource de Réhabilitation psychosociale et de remédiation cognitive, Hôpital du Vinatier, Centre référent Lyonnais en Réhabilitation et en Remédiation cognitive (CL3R), UMR 5229 (CNRS), France
- Brice Martin
- Centre Ressource de Réhabilitation psychosociale et de remédiation cognitive, Hôpital du Vinatier, Centre référent Lyonnais en Réhabilitation et en Remédiation cognitive (CL3R), UMR 5229 (CNRS), France
- Anne Giersch
- University of Strasbourg, INSERM U1114, 1 place de l'hôpital, 67100 Strasbourg, France; University Hospital of Strasbourg, Centre for Psychiatry, 67100 Strasbourg, France
7. Precise Spiking Motifs in Neurobiological and Neuromorphic Data. Brain Sci 2022; 13:68. [PMID: 36672049; PMCID: PMC9856822; DOI: 10.3390/brainsci13010068]
Abstract
Why do neurons communicate through spikes? By definition, spikes are all-or-none neural events which occur at continuous times. In other words, spikes are, on the one hand, binary, existing or not without further detail, and, on the other, can occur at any asynchronous time, without the need for a centralized clock. This stands in stark contrast to the analog representation of values and the discretized timing classically used in digital processing and at the base of modern-day neural networks. As neural systems almost systematically use this so-called event-based representation in the living world, a better understanding of this phenomenon remains a fundamental challenge in neurobiology, necessary to better interpret the profusion of recorded data. With the growing need for intelligent embedded systems, it also emerges as a new computing paradigm enabling the efficient operation of a new class of sensors and event-based computers, called neuromorphic, which could bring significant gains in computation time and energy consumption, a major societal issue in the era of the digital economy and global warming. In this review paper, we provide evidence from biology, theory and engineering that the precise timing of spikes plays a crucial role in our understanding of the efficiency of neural networks.
8. Wang W, Lei X, Gong W, Liang K, Chen L. Facilitation and inhibition effects of anodal and cathodal tDCS over areas MT+ on the flash-lag effect. J Neurophysiol 2022; 128:239-248. [PMID: 35766444; DOI: 10.1152/jn.00091.2022]
Abstract
The perceived position of a moving object in vision entails an accumulation of neural signals over space and time. Because of neural transmission delays, the visual system cannot acquire immediate information about a moving object's position. Physiological and psychophysical studies of the flash-lag effect (FLE), in which a moving object is perceived ahead of a flash even when the two are aligned at the same location, have shown that the visual system predicts the object's location to compensate for these neural delays; however, the neural mechanisms of motion-induced location prediction are still not well understood. Here, we investigated the role of neural activity changes in area MT+ (specialized for motion processing), and the potential contralateral processing preference of MT+, in modulating the FLE. Using transcranial direct current stimulation (tDCS) over the left and right MT+ between pre- and post-tests of the FLE in different motion directions, we measured the effects of tDCS on the FLE. Compared to sham stimulation, anodal and cathodal tDCS respectively enhanced and reduced the FLE when the moving object was heading toward, but not away from, the stimulated side of the brain. These findings suggest a causal role of area MT+ in motion-induced location prediction, which may involve the integration of position information.
Affiliation(s)
- Wu Wang
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Xiao Lei
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Wenxiao Gong
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Kun Liang
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Lihan Chen
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
9. Hu D, Ison M, Johnston A. Exploring the Common Mechanisms of Motion-Based Visual Prediction. Front Psychol 2022; 13:827029. [PMID: 35391983; PMCID: PMC8981589; DOI: 10.3389/fpsyg.2022.827029]
Abstract
Human vision supports prediction for moving stimuli. Here we take an individual differences approach to investigate whether there could be a common processing rate for motion-based visual prediction across diverse motion phenomena. Motion Induced Spatial Conflict (MISC) refers to an incongruity arising from two edges of a combined stimulus, moving rigidly, but with different apparent speeds. This discrepancy induces an illusory jitter that has been attributed to conflict within a motion prediction mechanism. Its apparent frequency has been shown to correlate with the frequency of alpha oscillations in the brain. We asked what other psychophysical measures might correlate positively with MISC frequency. We measured the correlation between MISC jitter frequency and three other measures that might be linked to motion-based spatial prediction. We demonstrate that the illusory jitter frequency in MISC correlates significantly with the accrual rate of the Motion Induced Position Shift (MIPS) effect, the well-established observation that carrier movement within the static envelope of a Gabor target leads to an apparent position shift of the envelope in the direction of motion. We did not observe significant correlations with the other two measures: the Adaptation Induced Spatial Shift (AISS) accrual rate and the Smooth Motion Threshold (SMT). These results suggest a shared perceptual rate between MISC and MIPS, implying a common periodic mechanism for motion-based visual prediction.
Affiliation(s)
- Dan Hu
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
- Matias Ison
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
- Alan Johnston
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
10. Hogendoorn H. Perception in real-time: predicting the present, reconstructing the past. Trends Cogn Sci 2022; 26:128-141. [PMID: 34973925; DOI: 10.1016/j.tics.2021.11.003]
Abstract
We feel that we perceive events in the environment as they unfold in real-time. However, this intuitive view of perception is impossible to implement in the nervous system due to biological constraints such as neural transmission delays. I propose a new way of thinking about real-time perception: at any given moment, instead of representing a single timepoint, perceptual mechanisms represent an entire timeline. On this timeline, predictive mechanisms predict ahead to compensate for delays in incoming sensory input, and reconstruction mechanisms retroactively revise perception when those predictions do not come true. This proposal integrates and extends previous work to address a crucial gap in our understanding of a fundamental aspect of our everyday life: the experience of perceiving the present.
Affiliation(s)
- Hinze Hogendoorn
- Melbourne School of Psychological Sciences, The University of Melbourne, Parkville, VIC 3010, Australia.
11. Amadeo MB, Tonelli A, Campus C, Gori M. Reduced flash lag illusion in early deaf individuals. Brain Res 2021; 1776:147744. [PMID: 34848173; DOI: 10.1016/j.brainres.2021.147744]
Abstract
When a brief flash is presented in alignment with a moving target, the flash typically appears to lag behind the moving stimulus. This effect is widely known in the literature as the flash-lag illusion (FLI), an example of a motion-induced position shift. Since auditory deprivation leads to both enhanced visual skills and impaired temporal abilities, both crucial for the perception of the flash-lag effect, we hypothesized that lack of audition could influence the FLI. Thirteen early deaf and 18 hearing individuals were tested in a visual FLI paradigm to investigate this hypothesis. As expected, results demonstrated a reduction of the flash-lag effect following early deafness, in both the central and peripheral visual fields. Moreover, only for deaf individuals was there a positive correlation between the flash-lag effect in the peripheral and central visual fields, suggesting that the mechanisms underlying the effect in the center of the visual field expand to the periphery following deafness. Overall, these findings reveal that lack of audition early in life profoundly impacts the early visual processing underlying the flash-lag effect.
Affiliation(s)
- Maria Bianca Amadeo
- U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via E. Melen 83, 16152 Genova, Italy.
- Alessia Tonelli
- U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via E. Melen 83, 16152 Genova, Italy
- Claudio Campus
- U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via E. Melen 83, 16152 Genova, Italy
- Monica Gori
- U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via E. Melen 83, 16152 Genova, Italy
12. Reddy NN. The implicit sense of agency is not a perceptual effect but is a judgment effect. Cogn Process 2021; 23:1-13. [PMID: 34751857; DOI: 10.1007/s10339-021-01066-x]
Abstract
The sense of agency (SoA) is characterized as the sense of being the causal agent of one's own actions, and it is measured in two forms: explicit and implicit. In explicit SoA experiments, participants explicitly report whether they have a sense of control over their actions or whether they or somebody else is the causal agent of seen actions; implicit SoA experiments study how participants' agentive or voluntary actions modify perceptual processes (such as time, vision, touch, and audition) without directly asking participants to explicitly think about their causal agency or sense of control. However, the recent implicit SoA literature has reported contradictory findings on the relationship between implicit SoA reports and agency states. Thus, I argue that the purported implicit SoA reports are not agency-driven perceptual effects per se but judgment effects, by showing that (a) the typical operationalizations in the implicit SoA domain lead to perceptual uncertainty on the part of the participants, (b) under uncertainty, participants' implicit SoA reports are due to heuristic judgments which are independent of agency states, and (c) under perceptual certainty, the typical implicit SoA reports might not have occurred at all. Thus, I conclude that the instances of implicit SoA are judgments (or response biases) made under uncertainty rather than perceptual effects.
13.
Abstract
This article reviews theoretical and empirical arguments for and against various theories that explain the classic Ponzo illusion and its variants from two different viewpoints concerning the role of perceived depth in size distortions. The first viewpoint argues that all Ponzo-like illusions are driven by perceived depth. The second viewpoint argues that the classic Ponzo illusion is unrelated to depth perception. This review gives special focus to the first viewpoint and consists of three sections. In the first section, the role of the number of pictorial depth cues and of previous experience in the strength of all Ponzo-like illusions is discussed. In the second section, we contrast the first viewpoint against theories that explain the classic Ponzo illusion with mechanisms unrelated to depth perception. In the last section, we propose a Bayesian-motivated reconceptualization of Richard Gregory's misapplied size constancy theory that explains Ponzo-variant illusions in terms of prior information and prediction errors. The new account explains why some studies have provided inconsistent evidence for misapplied size constancy theory.
14. White PA. The extended present: an informational context for perception. Acta Psychol (Amst) 2021; 220:103403. [PMID: 34454251; DOI: 10.1016/j.actpsy.2021.103403]
Abstract
Several previous authors have proposed a kind of specious or subjective present moment that covers a few seconds of recent information. This article proposes a new hypothesis about the subjective present, renamed the extended present, defined not in terms of time covered but as a thematically connected information structure held in working memory and in transiently accessible form in long-term memory. The three key features of the extended present are that information in it is thematically connected, both internally and to current attended perceptual input, it is organised in a hierarchical structure, and all information in it is marked with temporal information, specifically ordinal and duration information. Temporal boundaries to the information structure are determined by hierarchical structure processing and by limits on processing and storage capacity. Supporting evidence for the importance of hierarchical structure analysis is found in the domains of music perception, speech and language processing, perception and production of goal-directed action, and exact arithmetical calculation. Temporal information marking is also discussed and a possible mechanism for representing ordinal and duration information on the time scale of the extended present is proposed. It is hypothesised that the extended present functions primarily as an informational context for making sense of current perceptual input, and as an enabler for perception and generation of complex structures and operations in language, action, music, exact calculation, and other domains.
15. Resolving visual motion through perceptual gaps. Trends Cogn Sci 2021; 25:978-991. [PMID: 34489180; DOI: 10.1016/j.tics.2021.07.017]
Abstract
Perceptual gaps can be caused by objects in the foreground temporarily occluding objects in the background or by eyeblinks, which briefly but frequently interrupt visual information. Resolving visual motion across perceptual gaps is particularly challenging, as object position changes during the gap. We examine how visual motion is maintained and updated through externally driven (occlusion) and internally driven (eyeblinks) perceptual gaps. Focusing on both phenomenology and potential mechanisms such as suppression, extrapolation, and integration, we present a framework for how perceptual gaps are resolved over space and time. We finish by highlighting critical questions and directions for future work.
16
Abstract
Time is largely a hidden variable in vision. It is the condition for seeing interesting things such as spatial forms and patterns, colours and movements in the external world, and yet is not meant to be noticed in itself. Temporal aspects of visual processing have received comparatively little attention in research. Temporal properties have been made explicit mainly in measurements of resolution and integration in simple tasks such as detection of spatially homogeneous flicker or light pulses of varying duration. Only through a mechanistic understanding of their basis in retinal photoreceptors and circuits can such measures guide modelling of natural vision in different species and illuminate functional and evolutionary trade-offs. Temporal vision research would benefit from bridging traditions that speak different languages. Towards that goal, I here review studies from the fields of human psychophysics, retinal physiology and neuroethology, with a focus on fundamental constraints set by early vision. Summary: Simple measures of temporal vision such as the critical flicker frequency can be useful for modelling natural vision only if their relationship to photoreceptor responses and retinal processing is understood.
Affiliation(s)
- Kristian Donner
- Molecular and Integrative Biosciences Research Programme, Faculty of Biological and Environmental Sciences, University of Helsinki, 00014 Helsinki, Finland
17
Abstract
In addition to the role that our visual system plays in determining what we are seeing right now, visual computations contribute in important ways to predicting what we will see next. While the role of memory in creating future predictions is often overlooked, efficient predictive computation requires the use of information about the past to estimate future events. In this article, we introduce a framework for understanding the relationship between memory and visual prediction and review the two classes of mechanisms that the visual system relies on to create future predictions. We also discuss the principles that define the mapping from predictive computations to predictive mechanisms and how downstream brain areas interpret the predictive signals computed by the visual system.
Affiliation(s)
- Nicole C Rust
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania 19104;
- Stephanie E Palmer
- Department of Organismal Biology and Anatomy, University of Chicago, Illinois 60637;
18
Saini H, Jordan H, Fallah M. Color Modulates Feature Integration. Front Psychol 2021; 12:680558. [PMID: 34177733 PMCID: PMC8226161 DOI: 10.3389/fpsyg.2021.680558] [Received: 04/07/2021] [Accepted: 05/19/2021]
Abstract
Bayesian models of object recognition propose the resolution of ambiguity through probabilistic integration of prior experience with available sensory information. Color, even when task-irrelevant, has been shown to modulate high-level cognitive control tasks. However, it remains unclear how color modulations affect lower-level perceptual processing. We investigated whether color affects feature integration using the flash-jump illusion. This illusion occurs when an apparent motion stimulus, a rectangular bar appearing at different locations along a motion trajectory, changes color at a single position. Observers misperceive this color change as occurring farther along the trajectory of motion. This mislocalization error is proposed to be produced by a Bayesian perceptual framework dependent on responses in area V4. Our results demonstrated that the color of the flash modulated the magnitude of the flash-jump illusion such that participants reported less of a shift, i.e., a more veridical flash location, for both red and blue flashes, as compared to green and yellow. Our findings extend color-dependent modulation effects found in higher-order executive functions into lower-level Bayesian perceptual processes. Our results also support the theory that feature integration is a Bayesian process. In this framework, color modulations play an inherent and automatic role as different colors have different weights in Bayesian perceptual processing.
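The precision-weighting claim in the final sentences of this abstract can be illustrated with a toy calculation. Everything below is a sketch under assumed values: the function and the per-color weights are hypothetical, not figures from the study.

```python
# Toy sketch of Bayesian feature integration in the flash-jump illusion.
# The perceived flash position is a weighted average of the sensory flash
# location and a motion-extrapolated prior; a larger likelihood weight
# pulls the percept back toward the true flash location. The color
# weights are purely illustrative, not fitted values from the study.

def flash_jump_shift(flash_pos, extrapolated_pos, w_likelihood):
    """Perceived shift of the color change along the motion trajectory."""
    posterior = (w_likelihood * flash_pos
                 + (1.0 - w_likelihood) * extrapolated_pos)
    return posterior - flash_pos

# Hypothetical weights: red/blue flashes weighted more heavily than
# green/yellow, so they produce a smaller (more veridical) shift.
color_weight = {"red": 0.8, "blue": 0.8, "green": 0.5, "yellow": 0.5}
shift = {c: flash_jump_shift(0.0, 1.0, w) for c, w in color_weight.items()}
```

In this reading, "different weights for different colors" simply means the likelihood term carries color-dependent precision, which directly scales the mislocalization.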
Affiliation(s)
- Harpreet Saini
- Department of Biology, York University, Toronto, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Application (VISTA), York University, Toronto, ON, Canada
- Heather Jordan
- Centre for Vision Research, York University, Toronto, ON, Canada
- School of Kinesiology and Health Science, York University, Toronto, ON, Canada
- Mazyar Fallah
- Department of Biology, York University, Toronto, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Application (VISTA), York University, Toronto, ON, Canada
- School of Kinesiology and Health Science, York University, Toronto, ON, Canada
- Department of Human Health and Nutritional Sciences, College of Biological Science, University of Guelph, Guelph, ON, Canada
19
Burkitt AN, Hogendoorn H. Predictive Visual Motion Extrapolation Emerges Spontaneously and without Supervision at Each Layer of a Hierarchical Neural Network with Spike-Timing-Dependent Plasticity. J Neurosci 2021; 41:4428-4438. [PMID: 33888603 PMCID: PMC8152614 DOI: 10.1523/jneurosci.2017-20.2021] [Received: 08/02/2020] [Revised: 03/28/2021] [Accepted: 03/31/2021]
Abstract
The fact that the transmission and processing of visual information in the brain takes time presents a problem for the accurate real-time localization of a moving object. One way this problem might be solved is extrapolation: using an object's past trajectory to predict its location in the present moment. Here, we investigate how a layered neural network might implement such extrapolation mechanisms in silico, and how the necessary neural circuits might develop. We allowed an unsupervised hierarchical network of velocity-tuned neurons to learn its connectivity through spike-timing-dependent plasticity (STDP). We show that the temporal contingencies between the different neural populations that are activated by an object as it moves cause the receptive fields of higher-level neurons to shift in the direction opposite to their preferred direction of motion. The result is that neural populations spontaneously start to represent moving objects as being further along their trajectory than where they were physically detected. Because of the inherent delays of neural transmission, this effectively compensates for (part of) those delays by bringing the represented position of a moving object closer to its instantaneous position in the world. Finally, we show that this model accurately predicts the pattern of perceptual mislocalization that arises when human observers are required to localize a moving object relative to a flashed static object (the flash-lag effect; FLE). SIGNIFICANCE STATEMENT: Our ability to track and respond to rapidly changing visual stimuli, such as a fast-moving tennis ball, indicates that the brain is capable of extrapolating the trajectory of a moving object to predict its current position, despite the delays that result from neural transmission. Here, we show how the neural circuits underlying this ability can be learned through spike-timing-dependent synaptic plasticity and that these circuits emerge spontaneously and without supervision. This demonstrates how neural transmission delays can, in part, be compensated to implement the extrapolation mechanisms required to predict where a moving object is at the present moment.
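The receptive-field shift described in this abstract can be caricatured in a few lines. This is a toy construction under stated assumptions (a single downstream neuron, potentiation-only pre-before-post STDP, unit-speed rightward motion), not the authors' full spiking network:

```python
import numpy as np

# An object sweeps rightward across 11 position-tuned inputs; a
# downstream neuron, initially centred on position 5, fires when its
# strongest input arrives after a fixed transmission delay. Inputs that
# fired just before the post spike -- positions *behind* the current one
# -- get potentiated, so the receptive-field centre drifts opposite to
# the preferred motion direction, as in the paper's account.
n = 11
positions = np.arange(n)
w = np.exp(-0.5 * (positions - 5.0) ** 2)   # initial RF centred at position 5
delay = 2                                   # transmission delay (time steps)
a_plus, tau = 0.05, 3.0                     # STDP amplitude and time constant

for sweep in range(30):
    pre_spike = positions.astype(float)     # input i fires as the object passes: t = i
    arrival = pre_spike + delay             # spikes reach the neuron after the delay
    t_post = arrival[np.argmax(w)]          # post spike when the strongest input arrives
    dt = t_post - arrival                   # dt > 0 means pre fired before post
    w += np.where(dt > 0, a_plus * np.exp(-dt / tau), 0.0)

rf_centre = int(np.argmax(w))               # ends up below 5: shifted against the motion
```

Because the shifted field now responds earlier along the trajectory, the population effectively represents the object ahead of where it was detected, which is the delay-compensation result the abstract describes.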
Affiliation(s)
- Anthony N Burkitt
- Department of Biomedical Engineering, The University of Melbourne, Melbourne, Victoria 3010, Australia
- Hinze Hogendoorn
- Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, Victoria 3010, Australia
20
Abstract
Purpose: Amblyopes suffer a defect in temporal processing, presumably because of a neural delay in their visual processing. By measuring the flash-lag effect (FLE), we investigate whether the amblyopic visual system can compensate for the intrinsic neural delay due to visual information transmission from the retina to the cortex.
Methods: Eleven adults with amblyopia and 11 controls with normal vision participated in this study. We assessed the monocular FLE magnitude for each subject using a typical FLE paradigm: a bar moved horizontally while a flashed bar briefly appeared above or below it. Three luminance contrasts of the flashed bar were tested: 0.2, 0.6, and 1.
Results: All participants, controls and those with amblyopia, showed a typical FLE. However, the FLE magnitude of participants with amblyopia was significantly shorter than that of the control participants, for both their amblyopic eye (AE) and fellow eye (FE). A nonsignificant difference was found in FLE magnitude between the AE and the FE.
Conclusions: We demonstrate a reduced FLE in both the AE and the FE of patients with amblyopia, suggesting a global visual processing deficit. We suggest it may be attributed to a more limited spatiotemporal extent of facilitatory anticipatory activity within the amblyopic primary visual cortex.
Affiliation(s)
- Xi Wang
- Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- McGill Vision Research Unit, Department of Ophthalmology, McGill University, Montreal, Quebec, Canada
- Alexandre Reynaud
- McGill Vision Research Unit, Department of Ophthalmology, McGill University, Montreal, Quebec, Canada
- Robert F Hess
- McGill Vision Research Unit, Department of Ophthalmology, McGill University, Montreal, Quebec, Canada
21
Motion Extrapolation in Visual Processing: Lessons from 25 Years of Flash-Lag Debate. J Neurosci 2020; 40:5698-5705. [PMID: 32699152 DOI: 10.1523/jneurosci.0275-20.2020] [Received: 02/06/2020] [Revised: 06/16/2020] [Accepted: 06/18/2020]
Abstract
Because of the delays inherent in neural transmission, the brain needs time to process incoming visual information. If these delays were not somehow compensated, we would consistently mislocalize moving objects behind their physical positions. Twenty-five years ago, Nijhawan used a perceptual illusion he called the flash-lag effect (FLE) to argue that the brain's visual system solves this computational challenge by extrapolating the position of moving objects (Nijhawan, 1994). Although motion extrapolation had been proposed a decade earlier (e.g., Finke et al., 1986), the proposal that it caused the FLE and functioned to compensate for computational delays was hotly debated in the years that followed, with several alternative interpretations put forth to explain the effect. Here, I argue, 25 years later, that evidence from behavioral, computational, and particularly recent functional neuroimaging studies converges to support the existence of motion extrapolation mechanisms in the visual system, as well as their causal involvement in the FLE. First, findings that were initially argued to challenge the motion extrapolation model of the FLE have since been explained, and those explanations have been tested and corroborated by more recent findings. Second, motion extrapolation explains the spatial shifts observed in several FLE conditions that cannot be explained by alternative (temporal) models of the FLE. Finally, neural mechanisms that actually perform motion extrapolation have been identified at multiple levels of the visual system, in multiple species, and with multiple different methods. I outline key questions that remain, and discuss possible directions for future research.
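The core quantitative claim of the extrapolation account reviewed above reduces to a back-of-envelope formula: the moving object's represented position is extrapolated forward through the neural delay while the flash is not, so the flash appears to lag by roughly speed times delay. The 80 ms delay below is an illustrative figure, not a value asserted in this article.

```python
# Sketch of the extrapolation account of the flash-lag effect (FLE):
# spatial lag = target speed x uncompensated neural delay.

def predicted_flash_lag(speed, neural_delay):
    """Spatial lag of the flash relative to the mover (units of speed x time)."""
    return speed * neural_delay

lag = predicted_flash_lag(10.0, 0.08)   # 10 deg/s target, 80 ms delay -> 0.8 deg
```

The linear scaling with speed is one of the signatures that distinguishes spatial extrapolation models from purely temporal accounts of the effect.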
22
Johnson P, Davies S, Hogendoorn H. Motion extrapolation in the High-Phi illusion: Analogous but dissociable effects on perceived position and perceived motion. J Vis 2020; 20:8. [PMID: 33296460 PMCID: PMC7726593 DOI: 10.1167/jov.20.13.8]
Abstract
A range of visual illusions, including the much-studied flash-lag effect, demonstrate that neural signals coding for motion and position interact in the visual system. One interpretation of these illusions is that they are the consequence of motion extrapolation mechanisms in the early visual system. Here, we study the recently reported High-Phi illusion to investigate whether it might be caused by the same underlying mechanisms. In the High-Phi illusion, a rotating texture is abruptly replaced by a new, uncorrelated texture. This leads to the percept of a large illusory jump, which can be forward or backward depending on the duration of the initial motion sequence (the inducer). To investigate whether this motion illusion also leads to illusions of perceived position, in three experiments we asked observers to localize briefly flashed targets presented concurrently with the new texture. Our results replicate the original finding of perceived forward and backward jumps, and reveal an illusion of perceived position. Like the observed effects on illusory motion, these position shifts could be forward or backward, depending on the duration of the inducer: brief inducers caused forward mislocalization, and longer inducers caused backward mislocalization. Additionally, we found that both jumps and mislocalizations scaled in magnitude with the speed of the inducer. Interestingly, forward position shifts were observed at shorter inducer durations than forward jumps. We interpret our results as an interaction of extrapolation and correction-for-extrapolation, and discuss possible mechanisms in the early visual system that might carry out these computations.
Affiliation(s)
- Philippa Johnson
- Melbourne School of Psychological Sciences, Parkville, Victoria, Melbourne, Australia
- Sidney Davies
- Melbourne School of Psychological Sciences, Parkville, Victoria, Melbourne, Australia
- Hinze Hogendoorn
- Melbourne School of Psychological Sciences, Parkville, Victoria, Melbourne, Australia
23
Parker MG, Weightman AP, Tyson SF, Abbott B, Mansell W. Sensorimotor delays in tracking may be compensated by negative feedback control of motion-extrapolated position. Exp Brain Res 2020; 239:189-204. [PMID: 33136186 PMCID: PMC7884356 DOI: 10.1007/s00221-020-05962-0] [Received: 07/10/2020] [Accepted: 10/15/2020]
Abstract
Sensorimotor delays dictate that humans act on outdated perceptual information. As a result, continuous manual tracking of an unpredictable target incurs significant response delays. However, no such delays are observed for repeating targets such as sinusoids. Findings of this kind have led researchers to claim that the nervous system constructs predictive, probabilistic models of the world. However, a more parsimonious explanation is that visual perception of a moving target's position is systematically biased by its velocity. The resultant extrapolated position could be compared with the cursor position and the difference canceled by negative feedback control, compensating for sensorimotor delays. The current study tested whether a position extrapolation model fit human tracking of sinusoid (predictable) and pseudorandom (less predictable) targets better than the non-biased position control model. Twenty-eight participants tracked these targets, and the two computational models were fit to the data at 60 fixed loop delay values (simulating sensorimotor delays). We observed that pseudorandom targets were tracked with a significantly greater phase delay than sinusoid targets. For sinusoid targets, the position extrapolation model simulated tracking results more accurately for loop delays longer than 120 ms, thereby confirming its ability to compensate for sensorimotor delays. However, for pseudorandom targets, this advantage arose only after 300 ms, indicating that velocity information is unlikely to be exploited in this way during the tracking of less predictable targets. We conclude that negative feedback control of position is a parsimonious model for tracking pseudorandom targets and that negative feedback control of extrapolated position is a parsimonious model for tracking sinusoidal targets.
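The two model classes compared in this abstract can be sketched as a toy simulation. The gain, loop delay, and target below are illustrative assumptions, not the authors' fitted parameters: the point is only that biasing the perceived target position by velocity times delay lets delayed negative feedback track a predictable sinusoid with much less error.

```python
import numpy as np

# First-order cursor driven by delayed negative feedback on (a) the raw
# target position and (b) the target position extrapolated forward by
# velocity x loop delay. Parameters are illustrative only.
dt, delay_steps, gain = 0.01, 12, 5.0            # 120 ms simulated loop delay
t = np.arange(0.0, 10.0, dt)
target = np.sin(2 * np.pi * 0.5 * t)             # predictable 0.5 Hz sinusoid
velocity = np.gradient(target, dt)

def track(reference):
    """Integrate cursor position from delayed error feedback."""
    cursor = np.zeros_like(reference)
    for k in range(1, len(reference)):
        seen = k - delay_steps                   # outdated sample available now
        err = (reference[seen] - cursor[seen]) if seen >= 0 else 0.0
        cursor[k] = cursor[k - 1] + gain * err * dt
    return cursor

plain = track(target)                                 # position control
extrap = track(target + velocity * delay_steps * dt)  # extrapolated position

def rms(x):
    """Tracking error after the initial transient."""
    return float(np.sqrt(np.mean((x[300:] - target[300:]) ** 2)))
```

Running both controllers and comparing `rms(extrap)` with `rms(plain)` reproduces the qualitative result: the extrapolated-position loop largely cancels the phase lag that the plain position loop incurs on the predictable target.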
Affiliation(s)
- Maximilian G Parker
- Department of Psychology, University of Cambridge, Downing Street, Cambridge, CB2 3EB, UK.
- Andrew P Weightman
- Division of Mechanical, Aerospace and Civil Engineering, The University of Manchester, Manchester, UK
- Sarah F Tyson
- Division of Nursing, Midwifery and Social Work, The University of Manchester, Manchester, UK
- Bruce Abbott
- Psychology Department, Purdue University, Fort Wayne, IN, USA
- Warren Mansell
- Division of Psychology and Mental Health, University of Manchester, Manchester, UK
24
Lotter W, Kreiman G, Cox D. A neural network trained for prediction mimics diverse features of biological neurons and perception. NAT MACH INTELL 2020; 2:210-219. [PMID: 34291193 PMCID: PMC8291226 DOI: 10.1038/s42256-020-0170-9] [Received: 09/11/2019] [Accepted: 03/13/2020]
Abstract
Recent work has shown that convolutional neural networks (CNNs) trained on image recognition tasks can serve as valuable models for predicting neural responses in primate visual cortex. However, these models typically require biologically infeasible levels of labeled training data, so this similarity must at least arise via different paths. In addition, most popular CNNs are solely feedforward, lacking a notion of time and recurrence, whereas neurons in visual cortex produce complex time-varying responses, even to static inputs. Towards addressing these inconsistencies with biology, here we study the emergent properties of a recurrent generative network that is trained to predict future video frames in a self-supervised manner. Remarkably, the resulting model is able to capture a wide variety of seemingly disparate phenomena observed in visual cortex, ranging from single-unit response dynamics to complex perceptual motion illusions, even when subjected to highly impoverished stimuli. These results suggest potentially deep connections between recurrent predictive neural network models and computations in the brain, providing new leads that can enrich both fields.
Affiliation(s)
- Gabriel Kreiman
- Harvard University, Cambridge, MA, USA
- Boston Children’s Hospital, Harvard Medical School, Boston, MA, USA
- Center for Brains, Minds, and Machines (CBMM), Cambridge, MA, USA
- David Cox
- Harvard University, Cambridge, MA, USA
- MIT-IBM Watson AI Lab, Cambridge, MA, USA
- IBM Research, Cambridge, MA, USA
25
Hogendoorn H, Burkitt AN. Predictive Coding with Neural Transmission Delays: A Real-Time Temporal Alignment Hypothesis. eNeuro 2019; 6:ENEURO.0412-18.2019. [PMID: 31064839 PMCID: PMC6506824 DOI: 10.1523/eneuro.0412-18.2019] [Received: 10/25/2018] [Revised: 03/18/2019] [Accepted: 03/20/2019]
Abstract
Hierarchical predictive coding is an influential model of cortical organization, in which sequential hierarchical levels are connected by backward connections carrying predictions, as well as forward connections carrying prediction errors. To date, however, predictive coding models have largely neglected to take into account that neural transmission itself takes time. For a time-varying stimulus, such as a moving object, this means that backward predictions become misaligned with new sensory input. We present an extended model implementing both forward and backward extrapolation mechanisms that realigns backward predictions to minimize prediction error. This realignment has the consequence that neural representations across all hierarchical levels become aligned in real time. Using visual motion as an example, we show that the model is neurally plausible, that it is consistent with evidence of extrapolation mechanisms throughout the visual hierarchy, that it predicts several known motion-position illusions in human observers, and that it provides a solution to the temporal binding problem.
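The real-time alignment idea in this abstract can be illustrated schematically. The per-level delays, the assumption of a known constant velocity, and the linear trajectory below are all demonstration-only choices, not the paper's model details:

```python
# Schematic sketch of real-time temporal alignment: each hierarchical
# level receives input delayed by its accumulated transmission time and
# extrapolates the represented position forward by that same amount, so
# every level represents the moving object "now".

def represented_position(trajectory, t, level_delay, extrapolate):
    """Position encoded at a level with the given accumulated delay."""
    delayed = trajectory(t - level_delay)        # what actually arrives
    if not extrapolate:
        return delayed
    velocity = 10.0                              # deg/s, assumed known to the level
    return delayed + velocity * level_delay      # forward extrapolation

trajectory = lambda t: 10.0 * t                  # object moving at 10 deg/s
delays = [0.02, 0.05, 0.09]                      # illustrative per-level lags (s)

aligned = [represented_position(trajectory, 0.5, d, True) for d in delays]
lagged = [represented_position(trajectory, 0.5, d, False) for d in delays]
```

With extrapolation switched on, all three levels agree on the object's current position; without it, each level lags by velocity times its own delay, which is exactly the misalignment the extended model is designed to remove.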
Affiliation(s)
- Hinze Hogendoorn
- Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, Victoria 3010, Australia
- Helmholtz Institute, Department of Experimental Psychology, Utrecht University, 3512 JE, Utrecht, The Netherlands
- Anthony N Burkitt
- NeuroEngineering Laboratory, Department of Biomedical Engineering, University of Melbourne, Melbourne, Victoria 3010, Australia
26
Khoei MA, Ieng SH, Benosman R. Asynchronous Event-Based Motion Processing: From Visual Events to Probabilistic Sensory Representation. Neural Comput 2019; 31:1114-1138. [PMID: 30979350 DOI: 10.1162/neco_a_01191]
Abstract
In this work, we propose a two-layered descriptive model for motion processing from the retina to the cortex, with event-based input from the asynchronous time-based image sensor (ATIS) camera. Spatial and spatiotemporal filtering of visual scenes by motion energy detectors has been implemented in two steps: a simple layer of a lateral geniculate nucleus model and a set of three-dimensional Gabor kernels, eventually forming a probabilistic population response. The high temporal resolution of independent and asynchronous local sensory pixels from the ATIS provides realistic stimulation for studying biological motion processing, as well as for developing bio-inspired motion processors for computer vision applications. Our study combines two significant theories in neuroscience: event-based stimulation and probabilistic sensory representation. We have modeled how this might be done at the level of vision, and we suggest this framework as a generic computational principle across sensory modalities.
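The motion-energy stage mentioned in this abstract can be sketched with a quadrature pair of space-time Gabor filters. The grid size, frequencies, and envelope below are illustrative assumptions, not the paper's kernels; the sketch just shows why a tilted space-time carrier yields direction selectivity.

```python
import numpy as np

# A quadrature pair of space-time Gabor filters tuned to one drift
# direction: a grating whose space-time tilt matches the filters yields
# far more energy than the same grating drifting the opposite way.
nx, nt = 32, 32
x = np.arange(nx)[None, :] - nx / 2          # space (pixels)
t = np.arange(nt)[:, None] - nt / 2          # time (frames)
fx, ft = 0.125, 0.125                        # cycles/pixel, cycles/frame
envelope = np.exp(-(x ** 2) / 50.0 - (t ** 2) / 50.0)
phase = 2 * np.pi * (fx * x + ft * t)        # tilted carrier = direction tuning
gabor_even = envelope * np.cos(phase)
gabor_odd = envelope * np.sin(phase)

def motion_energy(stimulus):
    """Sum of squared quadrature responses (phase-invariant energy)."""
    return float(np.sum(stimulus * gabor_even) ** 2
                 + np.sum(stimulus * gabor_odd) ** 2)

def drifting_grating(direction):
    """direction=+1 matches the filters' space-time tilt; -1 opposes it."""
    return np.cos(2 * np.pi * (fx * x + direction * ft * t))

energy_matched = motion_energy(drifting_grating(+1))
energy_opposite = motion_energy(drifting_grating(-1))
```

Squaring and summing the even/odd pair makes the response independent of stimulus phase, which is the standard motion-energy construction the model builds its probabilistic population response on.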
Affiliation(s)
- Mina A Khoei
- Vision and Natural Computation Team, Vision Institute, Université Pierre et Marie Curie-Paris 6 (UPMC), Sorbonne Université UMR S968 Inserm, UPMC, CHNO des Quinze-Vingts, CNRS UMRS 7210, Paris 75012, France
- Sio-Hoi Ieng
- Vision and Natural Computation Team, Vision Institute, Université Pierre et Marie Curie-Paris 6 (UPMC), Sorbonne Université UMR S968 Inserm, UPMC, CHNO des Quinze-Vingts, CNRS UMRS 7210, Paris 75012, France
- Ryad Benosman
- Vision and Natural Computation Team, Vision Institute, Université Pierre et Marie Curie-Paris 6 (UPMC), Sorbonne Université UMR S968 Inserm, UPMC, CHNO des Quinze-Vingts, CNRS UMRS 7210, Paris 75012, France; University of Pittsburgh Medical Center, Pittsburgh, PA 15213; and Carnegie Mellon University, Robotics Institute, Pittsburgh, PA 15213, U.S.A.
27
van Heusden E, Harris AM, Garrido MI, Hogendoorn H. Predictive coding of visual motion in both monocular and binocular human visual processing. J Vis 2019; 19:3. [DOI: 10.1167/19.1.3]
Affiliation(s)
- Elle van Heusden
- Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, Australia
- Helmholtz Institute, Department of Experimental Psychology, Utrecht University, Utrecht, the Netherlands
- Anthony M. Harris
- Institute of Cognitive Neuroscience, University College London, London, UK
- Queensland Brain Institute, The University of Queensland, Brisbane, Australia
- Marta I. Garrido
- Queensland Brain Institute, The University of Queensland, Brisbane, Australia
- School of Mathematics and Physics, The University of Queensland, Brisbane, Australia
- Centre for Advanced Imaging, The University of Queensland, Brisbane, Australia
- Australian Research Council Centre of Excellence for Integrative Brain Function, The University of Queensland, Brisbane, Australia
- Hinze Hogendoorn
- Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, Australia
- Helmholtz Institute, Department of Experimental Psychology, Utrecht University, Utrecht, the Netherlands
28
Subramaniyan M, Ecker AS, Patel SS, Cotton RJ, Bethge M, Pitkow X, Berens P, Tolias AS. Faster processing of moving compared with flashed bars in awake macaque V1 provides a neural correlate of the flash lag illusion. J Neurophysiol 2018; 120:2430-2452. [PMID: 30365390 PMCID: PMC6295525 DOI: 10.1152/jn.00792.2017] [Received: 11/03/2017] [Revised: 08/13/2018] [Accepted: 08/14/2018]
Abstract
When the brain has determined the position of a moving object, because of anatomical and processing delays the object will have already moved to a new location. Given the statistical regularities present in natural motion, the brain may have acquired compensatory mechanisms to minimize the mismatch between the perceived and real positions of moving objects. A well-known visual illusion, the flash lag effect, points toward such a possibility. Although many psychophysical models have been suggested to explain this illusion, their predictions have not been tested at the neural level, particularly in a species of animal known to perceive the illusion. To this end, we recorded neural responses to flashed and moving bars from primary visual cortex (V1) of awake, fixating macaque monkeys. We found that the response latency to moving bars of varying speed, motion direction, and luminance was shorter than that to flashes, in a manner that is consistent with psychophysical results. At the level of V1, our results support the differential latency model positing that flashed and moving bars have different latencies. As we found a neural correlate of the illusion in passively fixating monkeys, our results also suggest that judging the instantaneous position of the moving bar at the time of the flash (as required by the postdiction/motion-biasing model) may not be necessary for observing a neural correlate of the illusion. Our results also suggest that the brain may have evolved mechanisms to process moving stimuli faster and closer to real time compared with briefly appearing stationary stimuli. NEW & NOTEWORTHY: We report several observations in awake macaque V1 that provide support for the differential latency model of the flash lag illusion. We find that the equal latency of flash and moving stimuli as assumed by motion integration/postdiction models does not hold in V1. We show that in macaque V1, motion processing latency depends on stimulus luminance, speed, and motion direction in a manner consistent with several psychophysical properties of the flash lag illusion.
Affiliation(s)
- Manivannan Subramaniyan
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas
- Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Alexander S Ecker
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas
- Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience Tübingen, Tübingen, Germany
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, Texas
- Saumil S Patel
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas
- R James Cotton
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas
- Matthias Bethge
- Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience Tübingen, Tübingen, Germany
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, Texas
- Xaq Pitkow
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, Texas
- Department of Electrical and Computer Engineering, Rice University, Houston, Texas
- Philipp Berens
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas
- Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience Tübingen, Tübingen, Germany
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, Texas
- Andreas S Tolias
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas
- Bernstein Center for Computational Neuroscience Tübingen, Tübingen, Germany
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, Texas
- Department of Electrical and Computer Engineering, Rice University, Houston, Texas
29
Affiliation(s)
- Peter A. White
- School of Psychology, Cardiff University, Cardiff, Wales, UK
30
Murali G. Now you see me, now you don't: dynamic flash coloration as an antipredator strategy in motion. Anim Behav 2018. [DOI: 10.1016/j.anbehav.2018.06.017]
31
Hogendoorn H, Burkitt AN. Predictive coding of visual object position ahead of moving objects revealed by time-resolved EEG decoding. Neuroimage 2018; 171:55-61. [DOI: 10.1016/j.neuroimage.2017.12.063] [Received: 10/12/2017] [Revised: 12/06/2017] [Accepted: 12/20/2017]