1
Wang SY, Zhang XY, Sun Q. Estimation bias and serial dependence in speed perception. BMC Psychol 2024; 12:598. PMID: 39472999. PMCID: PMC11520674. DOI: 10.1186/s40359-024-02114-9.
Abstract
Studies have found that feature estimates are systematically compressed towards the center of the stimulus distribution, showing a central tendency. Additionally, the estimate of the current feature is affected by previously seen features, showing serial dependence or an adaptation effect. However, whether these biases also occur in speed estimation remains unclear. To address this question, we asked participants to estimate the speed of moving Gabor patches. In Experiment 1, speeds were drawn from three uniform distributions with different lower and upper boundaries (i.e., slow, moderate, and fast ranges). In Experiment 2, speeds were arranged in an increasing, uniform, or decreasing distribution, with identical boundaries across the three distributions. Speed estimates were systematically compressed towards the center of the uniform distribution, showing a central tendency whose size increased with the range boundaries. In the decreasing and increasing distributions, the speed estimates additionally showed a bias away from the heavy tail of the distribution. Moreover, there was an attractive serial dependence that was not affected by the speed range. In summary, the current study, together with previous studies revealing a slow-speed bias, comprehensively characterizes estimation biases in speed perception.
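As an illustration of how these two biases are commonly quantified (a minimal sketch on simulated data with invented effect sizes, not the authors' analysis), central tendency shows up as a regression slope below 1 when estimates are regressed on true speeds, and attractive serial dependence as a positive relation between the current estimation error and the previous stimulus:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated session: true speeds from a uniform range (deg/s); estimates are
# compressed toward the range center and attracted toward the previous stimulus
# (all effect sizes are illustrative only).
true_speed = rng.uniform(2.0, 10.0, size=500)
center = true_speed.mean()
prev_speed = np.roll(true_speed, 1)
estimate = (0.7 * true_speed + 0.3 * center        # central tendency
            + 0.05 * (prev_speed - true_speed)     # attractive serial dependence
            + rng.normal(0.0, 0.5, size=500))      # estimation noise

# Central tendency: slope < 1 means compression toward the distribution center.
slope, intercept = np.polyfit(true_speed, estimate, 1)
print(f"regression slope: {slope:.2f} (1.0 = no compression)")

# Serial dependence: positive slope of current error against the difference
# between the previous and the current speed indicates attraction.
error = estimate - true_speed
sd_slope = np.polyfit(prev_speed[1:] - true_speed[1:], error[1:], 1)[0]
print(f"serial-dependence slope: {sd_slope:+.3f} (positive = attractive)")
```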
Affiliation(s)
- Si-Yu Wang
- School of Psychology, Zhejiang Normal University, Jinhua, P. R. China
- Xiao-Yan Zhang
- School of Psychology, Zhejiang Normal University, Jinhua, P. R. China
- Qi Sun
- School of Psychology, Zhejiang Normal University, Jinhua, P. R. China.
- Intelligent Laboratory of Zhejiang Province in Mental Health and Crisis Intervention for Children and Adolescents, Jinhua, P. R. China.
- Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, P. R. China.
2
Wu CC. Impacts of brightness contrast, road environment complexity, travel direction and judgement type on speed perception errors among older adult pedestrians' road-crossing decision-making. Australas J Ageing 2024. PMID: 39037914. DOI: 10.1111/ajag.13354.
Abstract
OBJECTIVES This study aimed to explore how various factors affect older people's vehicle speed perception to enhance their road safety as pedestrians, focusing on the impact of their cognitive and perceptual abilities on road-crossing decisions. METHODS The study evaluated the effects of brightness contrast (high, medium and low), road complexity (high and low) and vehicle travel direction (same and opposite) on speed perception errors in simulated traffic settings. It involved 38 older participants who estimated the speed of a comparison vehicle under two judgement conditions. RESULTS Findings showed a consistent underestimation of speed in all conditions. A repeated-measures ANOVA revealed that speed perception errors were significantly larger with low brightness contrast, in simpler road environments, with vehicles travelling in the same direction, and when using absolute judgements. CONCLUSIONS These results have practical importance for public safety initiatives, traffic regulation and road design catering to older adults' perceptual needs. They also provide valuable insights for driver training programs for older adults, aimed at enhancing their understanding and management of perceptual biases.
Affiliation(s)
- Chia-Chen Wu
- Department of Commercial Design and Management, National Taipei University of Business, Taipei, Taiwan
3
Angeletos Chrysaitis N, Seriès P. 10 years of Bayesian theories of autism: A comprehensive review. Neurosci Biobehav Rev 2023; 145:105022. PMID: 36581168. DOI: 10.1016/j.neubiorev.2022.105022.
Abstract
Ten years ago, Pellicano and Burr published one of the most influential articles in the study of autism spectrum disorders, linking them to aberrant Bayesian inference processes in the brain. In particular, they proposed that autistic individuals are less influenced by their brains' prior beliefs about the environment. In this systematic review, we investigate if this theory is supported by the experimental evidence. To that end, we collect all studies which included comparisons across diagnostic groups or autistic traits and categorise them based on the investigated priors. Our results are highly mixed, with a slight majority of studies finding no difference in the integration of Bayesian priors. We find that priors developed during the experiments exhibited reduced influences more frequently than priors acquired previously, with various studies providing evidence for learning differences between participant groups. Finally, we focus on the methodological and computational aspects of the included studies, showing low statistical power and often inconsistent approaches. Based on our findings, we propose guidelines for future research.
Affiliation(s)
- Nikitas Angeletos Chrysaitis
- Institute for Adaptive and Neural Computation, University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB, United Kingdom.
- Peggy Seriès
- Institute for Adaptive and Neural Computation, University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB, United Kingdom.
4
Freeman TCA, Powell G. Perceived speed at low luminance: Lights out for the Bayesian observer? Vision Res 2022; 201:108124. PMID: 36193604. DOI: 10.1016/j.visres.2022.108124.
Abstract
To account for perceptual bias, Bayesian models use the precision of early sensory measurements to weight the influence of prior expectations. As precision decreases, prior expectations start to dominate. Important examples come from motion perception, where the slow-motion prior has been used to explain a variety of motion illusions in vision, hearing, and touch, many of which correlate appropriately with threshold measures of underlying precision. However, the Bayesian account seems defeated by the finding that moving objects appear faster in the dark, because most motion thresholds are worse at low luminance. Here we show this is not the case for speed discrimination. Our results show that performance improves at low light levels by virtue of a perceived contrast cue that is more salient in the dark. With this cue removed, discrimination becomes independent of luminance. However, we found perceived speed still increased in the dark for the same observers, and by the same amount. A possible interpretation is that motion processing is therefore not Bayesian, because our findings challenge a key assumption these models make, namely that the accuracy of early sensory measurements is independent of basic stimulus properties like luminance. However, a final experiment restored Bayesian behaviour by adding external noise, making discrimination worse and slowing perceived speed down. Our findings therefore suggest that motion is processed in a Bayesian fashion but based on noisy sensory measurements that also vary in accuracy.
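The Bayesian logic being tested here can be made concrete with the textbook Gaussian observer (a sketch with made-up parameter values, not the authors' fitted model): the percept is a precision-weighted average of the sensory measurement and a slow-speed prior, so anything that widens the likelihood, such as added external noise, drags perceived speed toward the prior:

```python
def perceived_speed(v_measured, sigma_likelihood, mu_prior=0.0, sigma_prior=4.0):
    """Posterior mean of a Gaussian likelihood combined with a Gaussian
    slow-speed prior centred at mu_prior (a precision-weighted average)."""
    w_like = 1.0 / sigma_likelihood ** 2
    w_prior = 1.0 / sigma_prior ** 2
    return (w_like * v_measured + w_prior * mu_prior) / (w_like + w_prior)

v = 8.0  # physical speed, deg/s
for sigma in (0.5, 1.0, 2.0, 4.0):  # likelihood width, e.g. grows with external noise
    print(f"sigma_likelihood = {sigma:3.1f} -> perceived ~ {perceived_speed(v, sigma):.2f} deg/s")
```

On this account, the puzzle raised here is that perceived speed increased in the dark even though discrimination, and hence presumably the likelihood width, was unchanged; adding external noise in the final experiment widened the likelihood and slowed perceived speed, as the sketch predicts.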
Affiliation(s)
- Tom C A Freeman
- School of Psychology, Cardiff University, Tower Building, 70, Park Place, Cardiff CF10 3AT, United Kingdom.
- Georgie Powell
- School of Psychology, Cardiff University, Tower Building, 70, Park Place, Cardiff CF10 3AT, United Kingdom
5
Zhang LQ, Stocker AA. Prior Expectations in Visual Speed Perception Predict Encoding Characteristics of Neurons in Area MT. J Neurosci 2022; 42:2951-2962. PMID: 35169018. PMCID: PMC8985856. DOI: 10.1523/jneurosci.1920-21.2022.
Abstract
Bayesian inference provides an elegant theoretical framework for understanding the characteristic biases and discrimination thresholds in visual speed perception. However, the framework is difficult to validate because of its flexibility and the fact that suitable constraints on the structure of the sensory uncertainty have been missing. Here, we demonstrate that a Bayesian observer model constrained by efficient coding not only well explains human visual speed perception but also provides an accurate quantitative account of the tuning characteristics of neurons known for representing visual speed. Specifically, we found that the population coding accuracy for visual speed in area MT ("neural prior") is precisely predicted by the power-law, slow-speed prior extracted from fitting the Bayesian observer model to psychophysical data ("behavioral prior") to the point that the two priors are indistinguishable in a cross-validation model comparison. Our results demonstrate a quantitative validation of the Bayesian observer model constrained by efficient coding at both the behavioral and neural levels. SIGNIFICANCE STATEMENT: Statistical regularities of the environment play an important role in shaping both neural representations and perceptual behavior. Most previous work addressed these two aspects independently. Here we present a quantitative validation of a theoretical framework that makes joint predictions for neural coding and behavior, based on the assumption that neural representations of sensory information are efficient but also optimally used in generating a percept. Specifically, we demonstrate that the neural tuning characteristics for visual speed in brain area MT are precisely predicted by the statistical prior expectations extracted from psychophysical data. As such, our results provide a normative link between perceptual behavior and the neural representation of sensory information in the brain.
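The behavioural-to-neural link exploited here can be sketched with the efficient-coding constraint itself (illustrative parameter values only, not the fitted model): if coding resources are allocated in proportion to the prior, the square root of Fisher information is proportional to the prior density, so a power-law slow-speed prior predicts both the shape of the "neural prior" and discrimination thresholds that grow with speed:

```python
import numpy as np

speeds = np.linspace(0.5, 16.0, 200)       # deg/s

# Assumed power-law slow-speed prior p(v) ~ (v + v0)^(-a); a and v0 are invented.
a, v0 = 1.5, 0.3
prior = (speeds + v0) ** (-a)
prior /= prior.sum()                       # discrete normalisation

# Efficient coding: sqrt(Fisher information) proportional to the prior, so the
# "neural prior" recovered from MT tuning should match the behavioural prior.
sqrt_fisher = prior / prior.max()

# Discrimination threshold ~ 1 / sqrt(Fisher information): worse at higher speeds.
threshold = 1.0 / sqrt_fisher
for v in (1.0, 4.0, 8.0):
    i = np.argmin(np.abs(speeds - v))
    print(f"v = {v:4.1f} deg/s  relative threshold ~ {threshold[i]:.2f}")
```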
6
Patricio Décima A, Fernando Barraza J, López-Moliner J. The perceptual dynamics of the contrast induced speed bias. Vision Res 2021; 191:107966. PMID: 34808549. DOI: 10.1016/j.visres.2021.107966.
Abstract
In this article, we present a temporal extension of the slow-motion-prior model to generate predictions about the temporal evolution of the contrast-induced speed bias. We tested these predictions using a novel experimental paradigm that allows us to measure the dynamic perceptual difference between stimuli through a series of open-loop manual pursuit tasks. The results show good agreement with our model's predictions. The main findings reveal that hand-speed dynamics are affected by stimulus contrast in a way that is consistent with a dynamic model of motion perception that assumes a slow-motion prior. The proposed model also confirms observations from previous studies suggesting that the motion bias persists even at high contrast as a consequence of the dynamics of the slow-motion prior.
Affiliation(s)
- José Fernando Barraza
- Dpto. Luminotecnia, Luz y Visión "Herberto C. Bühler" (DLLyV), FACET, UNT, Argentina; Instituto de Investigación en Luz, Ambiente y Visión (ILAV), CONICET-UNT, Argentina
- Joan López-Moliner
- Vision and Control of Action (VISCA) Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Passeig de la Vall d'Hebron 171, 08035 Barcelona, Catalonia, Spain
7
Rideaux R, Welchman AE. Exploring and explaining properties of motion processing in biological brains using a neural network. J Vis 2021; 21:11. PMID: 33625466. PMCID: PMC7910626. DOI: 10.1167/jov.21.2.11.
Abstract
Visual motion perception underpins behaviors ranging from navigation to depth perception and grasping. Our limited access to biological systems constrains our understanding of how motion is processed within the brain. Here we explore properties of motion perception in biological systems by training a neural network to estimate the velocity of image sequences. The network recapitulates key characteristics of motion processing in biological brains, and we use our access to its structure to explore and understand motion (mis)perception. We find that the network captures the biological response to reverse-phi motion in terms of direction. We further find that it overestimates and underestimates the speed of slow and fast reverse-phi motion, respectively, because of the correlation between reverse-phi motion and the spatiotemporal receptive fields tuned to motion in opposite directions. Second, we find that the distribution of spatiotemporal tuning properties in the V1 and middle temporal (MT) layers of the network are similar to those observed in biological systems. We then show that, in comparison to MT units tuned to fast speeds, those tuned to slow speeds primarily receive input from V1 units tuned to high spatial frequency and low temporal frequency. Next, we find that there is a positive correlation between the pattern-motion and speed selectivity of MT units. Finally, we show that the network captures human underestimation of low coherence motion stimuli, and that this is due to pooling of noise and signal motion. These findings provide biologically plausible explanations for well-known phenomena and produce concrete predictions for future psychophysical and neurophysiological experiments.
Affiliation(s)
- Reuben Rideaux
- Department of Psychology, University of Cambridge, Cambridge, UK
8
Vacher J, Meso AI, Perrinet LU, Peyré G. Bayesian Modeling of Motion Perception Using Dynamical Stochastic Textures. Neural Comput 2018; 30:3355-3392. DOI: 10.1162/neco_a_01142.
Abstract
A common practice to account for psychophysical biases in vision is to frame them as consequences of a dynamic process relying on optimal inference with respect to a generative model. The study presented here details the complete formulation of such a generative model intended to probe visual motion perception with a dynamic texture model. It is derived in a set of axiomatic steps constrained by biological plausibility. We extend previous contributions by detailing three equivalent formulations of this texture model. First, the composite dynamic textures are constructed by the random aggregation of warped patterns, which can be viewed as three-dimensional gaussian fields. Second, these textures are cast as solutions to a stochastic partial differential equation (sPDE). This essential step enables real-time, on-the-fly texture synthesis using time-discretized autoregressive processes. It also allows for the derivation of a local motion-energy model, which corresponds to the log likelihood of the probability density. The log likelihoods are essential for the construction of a Bayesian inference framework. We use the dynamic texture model to psychophysically probe speed perception in humans using zoom-like changes in the spatial frequency content of the stimulus. The human data replicate previous findings showing perceived speed to be positively biased by spatial frequency increments. A Bayesian observer who combines a gaussian likelihood centered at the true speed and a spatial frequency dependent width with a “slow-speed prior” successfully accounts for the perceptual bias. More precisely, the bias arises from a decrease in the observer's likelihood width estimated from the experiments as the spatial frequency increases. Such a trend is compatible with the trend of the dynamic texture likelihood width.
Affiliation(s)
- Jonathan Vacher
- Département de Mathématique et Applications, École Normale Supérieure, Paris 75005, France; UNIC, Gif-sur-Yvette 91190, France; and CNRS, France
- Andrew Isaac Meso
- Institut des Neurosciences de la Timone, Marseille 13005, France, and Faculty of Science and Technology, Bournemouth University, Poole BH12 5BB, U.K
- Laurent U. Perrinet
- Institut de Neurosciences de la Timone, Marseille 13005, France, and CNRS, France
- Gabriel Peyré
- Département de Mathématique et Applications, École Normale Supérieure, Paris 75005, France, and CNRS, France
9
A Dynamic Bayesian Observer Model Reveals Origins of Bias in Visual Path Integration. Neuron 2018; 99:194-206.e5. PMID: 29937278. DOI: 10.1016/j.neuron.2018.05.040.
Abstract
Path integration is a strategy by which animals track their position by integrating their self-motion velocity. To identify the computational origins of bias in visual path integration, we asked human subjects to navigate in a virtual environment using optic flow and found that they generally traveled beyond the goal location. Such a behavior could stem from leaky integration of unbiased self-motion velocity estimates or from a prior expectation favoring slower speeds that causes velocity underestimation. Testing both alternatives using a probabilistic framework that maximizes expected reward, we found that subjects' biases were better explained by a slow-speed prior than imperfect integration. When subjects integrate paths over long periods, this framework intriguingly predicts a distance-dependent bias reversal due to buildup of uncertainty, which we also confirmed experimentally. These results suggest that visual path integration in noisy environments is limited largely by biases in processing optic flow rather than by leaky integration.
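A toy simulation (invented parameter values) makes clear why the two candidate explanations are hard to separate from mean behaviour alone, and why a probabilistic framework was needed: both a leaky integrator of unbiased velocity and perfect integration of prior-biased, underestimated velocity under-register the distance travelled, so both predict overshooting the goal:

```python
import numpy as np

dt, T = 0.01, 5.0                  # time step and travel duration (s)
t = np.arange(0.0, T, dt)
v = np.full_like(t, 2.0)           # constant self-motion speed, m/s
true_distance = v.sum() * dt       # 10 m actually travelled

# Hypothesis 1: leaky integration of unbiased velocity, dx/dt = v - x / tau.
tau = 4.0                          # leak time constant (s), assumed
x = 0.0
for vi in v:
    x += (vi - x / tau) * dt

# Hypothesis 2: perfect integration of speed underestimated by a slow-speed prior.
prior_gain = 0.8                   # perceived fraction of true speed, assumed
prior_distance = (prior_gain * v).sum() * dt

print(f"true distance       : {true_distance:.2f} m")
print(f"leaky integrator    : {x:.2f} m  (under-registered -> overshoot)")
print(f"slow-prior observer : {prior_distance:.2f} m  (under-registered -> overshoot)")
```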
10
Freeman TCA, Culling JF, Akeroyd MA, Brimijoin WO. Auditory compensation for head rotation is incomplete. J Exp Psychol Hum Percept Perform 2017; 43:371-380. PMID: 27841453. PMCID: PMC5289217. DOI: 10.1037/xhp0000321.
Abstract
Hearing is confronted by a similar problem to vision when the observer moves. The image motion that is created remains ambiguous until the observer knows the velocity of eye and/or head. One way the visual system solves this problem is to use motor commands, proprioception, and vestibular information. These "extraretinal signals" compensate for self-movement, converting image motion into head-centered coordinates, although not always perfectly. We investigated whether the auditory system also transforms coordinates by examining the degree of compensation for head rotation when judging a moving sound. Real-time recordings of head motion were used to change the "movement gain" relating head movement to source movement across a loudspeaker array. We then determined psychophysically the gain that corresponded to a perceptually stationary source. Experiment 1 showed that the gain was small and positive for a wide range of trained head speeds. Hence, listeners perceived a stationary source as moving slightly opposite to the head rotation, in much the same way that observers see stationary visual objects move against a smooth pursuit eye movement. Experiment 2 showed the degree of compensation remained the same for sounds presented at different azimuths, although the precision of performance declined when the sound was eccentric. We discuss two possible explanations for incomplete compensation, one based on differences in the accuracy of signals encoding image motion and self-movement and one concerning statistical optimization that sacrifices accuracy for precision. We then consider the degree to which such explanations can be applied to auditory motion perception in moving listeners.
Affiliation(s)
- Michael A Akeroyd
- Medical Research Council Institute of Hearing Research, University of Nottingham
- W Owen Brimijoin
- Medical Research Council/Chief Scientist Office Institute of Hearing Research-Scottish Section, Glasgow Royal Infirmary
11
Abstract
According to Bayesian models, perception and cognition depend on the optimal combination of noisy incoming evidence with prior knowledge of the world. Individual differences in perception should therefore be jointly determined by a person’s sensitivity to incoming evidence and his or her prior expectations. It has been proposed that individuals with autism have flatter prior distributions than do nonautistic individuals, which suggests that prior variance is linked to the degree of autistic traits in the general population. We tested this idea by studying how perceived speed changes during pursuit eye movement and at low contrast. We found that individual differences in these two motion phenomena were predicted by differences in thresholds and autistic traits when combined in a quantitative Bayesian model. Our findings therefore support the flatter-prior hypothesis and suggest that individual differences in prior expectations are more systematic than previously thought. In order to be revealed, however, individual differences in sensitivity must also be taken into account.
12
Tong J, Ngo V, Goldreich D. Tactile length contraction as Bayesian inference. J Neurophysiol 2016; 116:369-379. PMID: 27121574. DOI: 10.1152/jn.00029.2016.
Abstract
To perceive, the brain must interpret stimulus-evoked neural activity. This is challenging: The stochastic nature of the neural response renders its interpretation inherently uncertain. Perception would be optimized if the brain used Bayesian inference to interpret inputs in light of expectations derived from experience. Bayesian inference would improve perception on average but cause illusions when stimuli violate expectation. Intriguingly, tactile, auditory, and visual perception are all prone to length contraction illusions, characterized by the dramatic underestimation of the distance between punctate stimuli delivered in rapid succession; the origin of these illusions has been mysterious. We previously proposed that length contraction illusions occur because the brain interprets punctate stimulus sequences using Bayesian inference with a low-velocity expectation. A novel prediction of our Bayesian observer model is that length contraction should intensify if stimuli are made more difficult to localize. Here we report a tactile psychophysical study that tested this prediction. Twenty humans compared two distances on the forearm: a fixed reference distance defined by two taps with 1-s temporal separation and an adjustable comparison distance defined by two taps with temporal separation t ≤ 1 s. We observed significant length contraction: As t was decreased, participants perceived the two distances as equal only when the comparison distance was made progressively greater than the reference distance. Furthermore, the use of weaker taps significantly enhanced participants' length contraction. These findings confirm the model's predictions, supporting the view that the spatiotemporal percept is a best estimate resulting from a Bayesian inference process.
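The two model predictions tested here fall out of a simple Gaussian version of a low-velocity-prior observer (a back-of-the-envelope sketch with assumed noise values, not the published model): with a zero-mean prior on speed, the induced prior on the separation d = v*t has a standard deviation proportional to t, so the inferred separation shrinks more when t is short or when the taps are harder to localize:

```python
def perceived_separation(d_measured, t, sigma_loc, sigma_v=10.0):
    """MAP separation for two taps measured d_measured cm and t s apart.
    Assumes Gaussian localisation noise (sigma_loc, cm) on the measured
    separation and a zero-mean Gaussian speed prior with std sigma_v (cm/s)."""
    prior_var = (sigma_v * t) ** 2               # prior variance of d = v * t
    shrink = prior_var / (prior_var + sigma_loc ** 2)
    return shrink * d_measured

d = 10.0  # physical separation, cm
for sigma_loc in (1.0, 3.0):                     # strong taps vs weak, hard-to-localize taps
    for t in (1.0, 0.4, 0.1):
        print(f"sigma_loc = {sigma_loc}, t = {t:3.1f} s -> "
              f"perceived ~ {perceived_separation(d, t, sigma_loc):4.1f} cm")
```

Shrinkage increases as t decreases and as localisation noise grows, mirroring the two effects reported in the experiment.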
Affiliation(s)
- Jonathan Tong
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada
- Vy Ngo
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada
- Daniel Goldreich
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada; McMaster Integrative Neuroscience Discovery and Study, Hamilton, Ontario, Canada; and McMaster University Origins Institute, Hamilton, Ontario, Canada
13
Abstract
Object motion in natural scenes results in visual stimuli with a rich and broad spatiotemporal frequency spectrum. While the question of how the visual system detects and senses motion energies at different spatial and temporal frequencies has been fairly well studied, it is unclear how the visual system integrates this information to form coherent percepts of object motion. We applied a combination of tailored psychophysical experiments and predictive modeling to address this question with regard to perceived motion in a given direction (i.e., stimulus speed). We tested human subjects in a discrimination experiment using stimuli that selectively targeted four distinct spatiotemporally tuned channels with center frequencies consistent with a common speed. We first characterized subjects' responses to stimuli that targeted only individual channels. Based on these measurements, we then predicted subjects' psychometric functions for stimuli that targeted multiple channels simultaneously. Specifically, we compared predictions of three Bayesian observer models that either optimally integrated the information across all spatiotemporal channels, or only used information from the most reliable channel, or formed an average percept across channels. Only the model with optimal integration was successful in accounting for the data. Furthermore, the proposed channel model provides an intuitive explanation for the previously reported spatial frequency dependence of perceived speed of coherent object motion. Finally, our findings indicate that a prior expectation for slow speeds is added to the inference process only after the sensory information is combined and integrated.
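The three observer models compared here correspond to three read-out rules over the channel estimates (a minimal sketch with invented channel reliabilities): inverse-variance weighting (optimal integration), taking only the most reliable channel, or an unweighted average:

```python
import numpy as np

# Hypothetical speed estimates (deg/s) and variances from four spatiotemporal
# channels whose centre frequencies are consistent with the same speed.
estimates = np.array([4.2, 3.8, 4.5, 3.6])
variances = np.array([0.2, 0.5, 1.0, 2.0])

weights = (1.0 / variances) / np.sum(1.0 / variances)

optimal = np.sum(weights * estimates)             # precision-weighted combination
most_reliable = estimates[np.argmin(variances)]   # single best channel
unweighted = estimates.mean()                     # simple average across channels

print(f"optimal integration : {optimal:.2f} deg/s "
      f"(combined variance {1.0 / np.sum(1.0 / variances):.2f})")
print(f"most reliable only  : {most_reliable:.2f} deg/s")
print(f"unweighted average  : {unweighted:.2f} deg/s")
```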