1. Shivkumar S, DeAngelis GC, Haefner RM. Hierarchical motion perception as causal inference. Nat Commun 2025; 16:3868. PMID: 40274770. DOI: 10.1038/s41467-025-58797-0.
Abstract
Motion can only be defined relative to a reference frame; yet it remains unclear which reference frame guides perception. A century of psychophysical studies has produced conflicting evidence: retinotopic, egocentric, world-centric, or even object-centric. We introduce a hierarchical Bayesian model mapping retinal velocities to perceived velocities. Our model mirrors the structure of the world, in which visual elements move within causally connected reference frames. Friction renders velocities in these reference frames mostly stationary, formalized by an additional delta component (at zero) in the prior. Inverting this model automatically segments visual inputs into groups and groups into supergroups, progressively inferring structured reference frames, and "perceives" motion in the appropriate reference frame. Critical model predictions are supported by two experiments, and fitting our model to the data allows us to infer the subjective set of reference frames used by individual observers. Our model provides a quantitative normative justification for key Gestalt principles, providing inspiration for building better models of visual processing in general.
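The delta-at-zero component of the prior turns segmentation into a causal-inference problem: given a noisy retinal velocity, the observer weighs a stationary cause (friction) against a moving one. A minimal one-dimensional sketch of that computation, with illustrative parameter values rather than the paper's fitted hierarchical model:

```python
import math

def gauss_pdf(x, sigma):
    """Density of a zero-mean Gaussian at x."""
    return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def p_stationary(v_obs, sigma_obs=1.0, sigma_move=5.0, p_still=0.6):
    """Posterior probability that an element is stationary in its reference
    frame, given a noisy retinal velocity v_obs.

    Prior on true velocity: a delta at zero with mass p_still (friction)
    mixed with a broad Gaussian over moving velocities. All parameter
    values here are hypothetical stand-ins.
    """
    like_still = gauss_pdf(v_obs, sigma_obs)  # measurement noise only
    like_move = gauss_pdf(v_obs, math.hypot(sigma_obs, sigma_move))
    post = p_still * like_still
    return post / (post + (1.0 - p_still) * like_move)
```

A slow retinal drift is attributed to a stationary cause with high probability, while a fast one demands a moving cause; the full model applies this logic recursively across nested reference frames.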
Affiliation(s)
- Sabyasachi Shivkumar: Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Gregory C DeAngelis: Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA; Center for Visual Science, University of Rochester, Rochester, NY, USA
- Ralf M Haefner: Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA; Center for Visual Science, University of Rochester, Rochester, NY, USA

2. Mazuz Y, Hadad BS, Ganel T. Intact Susceptibility to Visual Illusions in Autistic Individuals. Autism Res 2025. PMID: 40259703. DOI: 10.1002/aur.70044.
Abstract
Altered sensory perception, a core characteristic of autism, has been attributed to attenuated use of stimulus context or prior information in perception. Reduced susceptibility to perceptual illusions has been used extensively to support these accounts of autistic perception. However, empirical evidence has been inconsistent. The current study systematically investigated susceptibility to size illusions in autistic and non-autistic individuals using a standardized psychophysical battery. Eighty-one participants, 41 autistic and 40 non-autistic individuals, completed the Ben-Gurion University Test for Perceptual Illusions (BTPI), measuring susceptibility to the Ponzo, Ebbinghaus, and height-width illusions. The results demonstrate clear evidence for susceptibility to illusions in the perception of size in both the autistic and non-autistic groups. No significant differences were found between groups in the magnitude of the illusions' effect on perceived size, or in the perceptual resolution of size (discrimination thresholds), in any of the illusory settings tested. The results challenge current theories suggesting reduced reliance on priors or enhanced sensory measurement in autism. Instead, using robust psychophysical methods, the study provides clear evidence that autistic people form priors and use long-term knowledge in perception.
Affiliation(s)
- Yarden Mazuz: Department of Psychology, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Bat-Sheva Hadad: Department of Special Education and the Edmond J. Safra Brain Research Center, University of Haifa, Haifa, Israel
- Tzvi Ganel: Department of Psychology, Ben-Gurion University of the Negev, Beer-Sheva, Israel

3. Yu K, Vanpaemel W, Tuerlinckx F, Zaman J. The probabilistic and dynamic nature of perception in human generalization behavior. iScience 2025; 28:112228. PMID: 40230534. PMCID: PMC11995087. DOI: 10.1016/j.isci.2025.112228.
Abstract
Generalization theories traditionally overlook how our mental representations dynamically change in the process of transferring learned knowledge to new contexts. We integrated perceptual and generalization theories into a computational model using data from 80 participants who underwent Pavlovian fear conditioning experiments. The model analyzed continuous measures of perception and fear generalization to understand their relationship. Our findings revealed large individual variations in perceptual processes that directly influence generalization patterns. By examining how perceptual and generalization mechanisms work together, we uncovered their combined role in producing generalization behavior. This research illuminates the probabilistic perceptual foundations underlying individual differences in generalization, emphasizing the crucial integration between perceptual and generalization processes. Understanding this relationship enhances our knowledge of generalization behavior and has potential implications for various cognitive domains including categorization, motor learning, language processing, and face recognition-all of which rely on generalization as a fundamental cognitive process.
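The core idea, that generalization operates over noisy percepts rather than physical stimulus values, can be sketched by Monte-Carlo-averaging a Shepard-style exponential gradient over perceptual samples. The gradient form and all parameter values are illustrative assumptions, not the authors' fitted model:

```python
import math
import random

def mean_fear_response(stimulus, cs_plus, perc_sd=0.5, decay=1.0,
                       n_samples=5000, seed=0):
    """Expected generalized response to `stimulus` after conditioning on
    `cs_plus`: each trial's percept is a noisy sample around the physical
    stimulus, and responding follows an exponential gradient over the
    perceived (not physical) distance to the conditioned stimulus."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        percept = rng.gauss(stimulus, perc_sd)
        total += math.exp(-abs(percept - cs_plus) / decay)
    return total / n_samples
```

Increasing `perc_sd` flattens the gradient, which is one route by which individual differences in perception can produce individual differences in generalization behavior.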
Affiliation(s)
- Kenny Yu: Quantitative Psychology and Individual Differences, KU Leuven, 3000 Leuven, Belgium
- Wolf Vanpaemel: Quantitative Psychology and Individual Differences, KU Leuven, 3000 Leuven, Belgium
- Francis Tuerlinckx: Quantitative Psychology and Individual Differences, KU Leuven, 3000 Leuven, Belgium
- Jonas Zaman: REVAL Rehabilitation Research, Faculty of Rehabilitation Sciences, UHasselt, 3590 Diepenbeek, Belgium; Centre for Learning and Experimental Psychopathology, KU Leuven, 3000 Leuven, Belgium; Center for Translational Neuro- and Behavioral Sciences, University of Duisburg-Essen, 47057 Duisburg, Germany

4. Sun Y, Sommer W, Sun Q, Cao X. Holistic and local processing occur simultaneously for inverted faces: Evidence from behavior and computational modeling. Atten Percept Psychophys 2025; 87:922-935. PMID: 40102329. DOI: 10.3758/s13414-025-03042-5.
Abstract
Whether inverted faces are processed locally or holistically has been debated for many years. This study conducted two experiments to explore the extent of holistic processing of inverted faces. Experiment 1 adopted a face-congruency paradigm that orthogonally manipulated stimulus congruency and orientation. Experiment 2 employed the complete congruency paradigm to test whether misalignment effects for inverted faces are related to holistic processing. The results of both experiments consistently demonstrated that inverted faces are processed not only locally but also holistically, and that misalignment disrupts the holistic processing of inverted faces. Subsequent computational modeling showed that in the congruent condition, the contributions of holistic and local information to inverted-face processing performance were 24% and 76%, respectively, whereas in the incongruent condition, they were 10% and 90%, respectively. Together, the present study reveals that inverted faces, too, are processed holistically, albeit to a lesser degree than upright faces.
Affiliation(s)
- Yini Sun: Faculty of Psychology, Zhejiang Normal University, Jinhua, China
- Werner Sommer: Faculty of Psychology, Zhejiang Normal University, Jinhua, China; Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany; Department of Physics and Lifescience Neuroimaging Center, Hong Kong Baptist University, Hong Kong, China; Faculty of Education, National University of Malaysia, Kuala Lumpur, Malaysia
- Qi Sun: Faculty of Psychology, Zhejiang Normal University, Jinhua, China; Intelligent Laboratory of Zhejiang Province in Mental Health and Crisis Intervention for Children and Adolescents, Jinhua, China; Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, China
- Xiaohua Cao: Faculty of Psychology, Zhejiang Normal University, Jinhua, China

5. Langlois TA, Charlton JA, Goris RLT. Bayesian inference by visuomotor neurons in the prefrontal cortex. Proc Natl Acad Sci U S A 2025; 122:e2420815122. PMID: 40146856. PMCID: PMC12002263. DOI: 10.1073/pnas.2420815122.
Abstract
Perceptual judgments of the environment emerge from the concerted activity of neural populations in decision-making areas downstream of the sensory cortex. When the sensory input is ambiguous, perceptual judgments can be biased by prior expectations shaped by environmental regularities. These effects are examples of Bayesian inference, a reasoning method in which prior knowledge is leveraged to optimize uncertain decisions. However, it is not known how decision-making circuits combine sensory signals and prior expectations to form a perceptual decision. Here, we study neural population activity in the prefrontal cortex of macaque monkeys trained to report perceptual judgments of ambiguous visual stimuli under two different stimulus distributions. We isolate the component of the neural population response that represents the formation of the perceptual decision (the decision variable, DV), and find that its dynamical evolution reflects the integration of sensory signals and prior expectations. Prior expectations impact the DV's trajectory both before and during stimulus presentation such that DV trajectories with a smaller dynamic range result in more biased and less sensitive perceptual decisions. We show that these results resemble a specific variant of Bayesian inference known as approximate hierarchical inference. Our findings expand our understanding of the mechanisms by which prefrontal circuits can execute Bayesian inference.
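The integration described here can be caricatured as a decision variable that starts at the prior log-odds and then accumulates noisy sensory evidence, so a biased starting point yields more biased choices when evidence is weak. A sketch with hypothetical parameters, not the recorded population dynamics:

```python
import random

def dv_trajectory(drift, log_prior_odds, n_steps=50, noise_sd=0.5, seed=0):
    """Decision-variable trajectory: initialized at the prior log-odds
    (the expectation induced by the stimulus distribution), then
    integrating noisy sensory evidence with mean `drift` per step."""
    rng = random.Random(seed)
    dv = [log_prior_odds]
    for _ in range(n_steps):
        dv.append(dv[-1] + drift + rng.gauss(0.0, noise_sd))
    return dv

def choice(dv):
    """Report the category favored by the final decision variable."""
    return 1 if dv[-1] > 0 else 0
```

With an ambiguous stimulus (near-zero drift), the prior term dominates the final state, mimicking the prior-induced bias before and during stimulus presentation.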
Affiliation(s)
- Thomas A. Langlois: Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, Cambridge, MA 02139
- Julie A. Charlton: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540
- Robbe L. T. Goris: Center for Perceptual Systems, The University of Texas at Austin, Austin, TX 78712

6. Lim C, Vishwanath D, Domini F. Sensorimotor adaptation reveals systematic biases in 3D perception. Sci Rep 2025; 15:3847. PMID: 39885329. PMCID: PMC11782619. DOI: 10.1038/s41598-025-88214-x.
Abstract
The existence of biases in visual perception and their impact on visually guided actions has long been a fundamental yet unresolved question. Evidence revealing perceptual or visuomotor biases has typically been disregarded because such biases in spatial judgments can often be attributed to experimental measurement confounds. To resolve this controversy, we leveraged the visuomotor system's adaptation mechanism - triggered only by a discrepancy between visual estimates and sensory feedback - to directly indicate whether systematic errors in perceptual and visuomotor spatial judgments exist. In a within-subject study (N = 24), participants grasped a virtual 3D object with varying numbers of depth cues (single vs. multiple) while receiving haptic feedback. The resulting visuomotor adaptations and aftereffects demonstrated that the planned grip size, determined by the visually perceived depth of the object, was consistently overestimated. This overestimation intensified when multiple cues were present, despite no actual change in physical depth. These findings confirm the presence of inherent biases in visual estimates for both perception and action, and highlight the potential use of visuomotor adaptation as a novel tool for understanding perceptual biases.
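The logic of using adaptation as a readout can be sketched as a simple error-driven update: if the planned grip reflects an overestimated visual depth, haptic feedback drives a compensatory offset whose asymptote reveals the size of the bias. The numbers and update rule below are hypothetical, not the study's paradigm details:

```python
def adaptation_offsets(planned_grip, haptic_size, alpha=0.2, n_trials=50):
    """Trial-by-trial motor offset driven by the discrepancy between the
    executed grip (visual plan + current offset) and haptic feedback.
    The offset converges to haptic_size - planned_grip, i.e. minus the
    visual overestimation, which an aftereffect probe can then expose."""
    offsets = [0.0]
    for _ in range(n_trials):
        error = (planned_grip + offsets[-1]) - haptic_size
        offsets.append(offsets[-1] - alpha * error)
    return offsets
```

For example, a 60 mm planned grip against a 50 mm haptic object drives the offset toward -10 mm; removing the feedback then reveals the aftereffect.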
Affiliation(s)
- Chaeeun Lim: Brown University, Cognitive and Psychological Sciences, Providence, 02912, USA
- Dhanraj Vishwanath: University of St Andrews, School of Psychology and Neuroscience, St Andrews, KY16 9AJ, UK
- Fulvio Domini: Brown University, Cognitive and Psychological Sciences, Providence, 02912, USA

7. Wei XX, Woodford M. Representational geometry explains puzzling error distributions in behavioral tasks. Proc Natl Acad Sci U S A 2025; 122:e2407540122. PMID: 39854237. PMCID: PMC11789072. DOI: 10.1073/pnas.2407540122.
Abstract
Measuring and interpreting errors in behavioral tasks is critical for understanding cognition. Conventional wisdom assumes that encoding/decoding errors for continuous variables in behavioral tasks should naturally have Gaussian distributions, so that deviations from normality in the empirical data indicate the presence of more complex sources of noise. This line of reasoning has been central to prior research on working memory. Here, we reassess this assumption and find that even in ideal observer models with Gaussian encoding noise, the error distribution is generally non-Gaussian, contrary to the commonly held belief. Critically, we find that the shape of the error distribution is determined by the geometrical structure of the encoding manifold via a simple rule. In the case of a high-dimensional geometry, the error distributions naturally exhibit flat tails. Using this insight, we apply our theory to visual short-term memory tasks and find that it can account for a large array of experimental data with only two free parameters. Our results challenge the dominant view of the mechanisms and capacity constraints of working memory systems. They instead suggest that the Bayesian framework, which explains various aspects of perceptual behavior, also provides an excellent account of working memory. Overall, our results establish a direct connection between neural manifold geometry and behavior, and call attention to the geometry of the representation as a critically important, yet underappreciated, factor in determining the character of errors in human behavior.
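The geometry-to-error-shape link can be illustrated in the simplest curved case: a circular variable encoded as a point on a ring, with isotropic Gaussian noise and maximum-likelihood decoding (the angle of the noisy point). Shrinking the radius relative to the noise produces markedly non-Gaussian, flat-tailed errors. This is an illustrative special case, not the paper's general rule:

```python
import math
import random

def decoding_errors(radius, noise_sd=1.0, n_trials=20000, seed=1):
    """Errors of an ideal observer decoding an angle that was encoded as a
    point on a circle of the given radius and corrupted by isotropic
    Gaussian noise; errors are wrapped to (-pi, pi]."""
    rng = random.Random(seed)
    errors = []
    for _ in range(n_trials):
        theta = rng.uniform(-math.pi, math.pi)
        x = radius * math.cos(theta) + rng.gauss(0.0, noise_sd)
        y = radius * math.sin(theta) + rng.gauss(0.0, noise_sd)
        err = math.atan2(y, x) - theta          # ML decode, signed error
        errors.append((err + math.pi) % (2.0 * math.pi) - math.pi)
    return errors
```

At a large radius (high signal-to-noise) the error histogram is narrow and nearly Gaussian; at a small radius it spreads and develops the flat tails that are often attributed to more exotic noise sources.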
Affiliation(s)
- Xue-Xin Wei: Department of Neuroscience, Department of Psychology, Center for Perceptual Systems, Center for Learning and Memory, and Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, TX 78712

8. Sheth J, Collina JS, Piasini E, Kording KP, Cohen YE, Geffen MN. The interplay of uncertainty, relevance and learning influences auditory categorization. Sci Rep 2025; 15:3348. PMID: 39870756. PMCID: PMC11772889. DOI: 10.1038/s41598-025-86856-5.
Abstract
Auditory perception requires categorizing sound sequences, such as speech or music, into classes, such as syllables or notes. Auditory categorization depends not only on the acoustic waveform, but also on variability and uncertainty in how the listener perceives the sound - including sensory and stimulus uncertainty, the listener's estimated relevance of the particular sound to the task, and their ability to learn the past statistics of the acoustic environment. Whereas these factors have been studied in isolation, whether and how they interact to shape categorization remains unknown. Here, we measured human participants' performance on a multi-tone categorization task and modeled each participant's behavior using a Bayesian framework. Task-relevant tones contributed more to category choice than task-irrelevant tones, confirming that participants combined information about sensory features with task relevance. Conversely, participants' poor estimates of task-relevant tones or high sensory uncertainty adversely impacted category choice. Learning the statistics of sound categories over both short and long timescales also affected decisions, biasing them toward the overrepresented category. The magnitude of this effect correlated inversely with participants' relevance estimates. Our results demonstrate that individual participants idiosyncratically weigh sensory uncertainty, task relevance, and statistics over both short and long timescales, providing a novel understanding of, and a computational framework for, how sensory decisions are made under several simultaneous behavioral demands.
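The ingredients the model combines can be sketched as a relevance-weighted evidence sum plus a bias term for learned category statistics. The boundary, weights, and units below are hypothetical stand-ins for the fitted Bayesian observer:

```python
def category_log_odds(tone_freqs, relevance, boundary=500.0, stat_bias=0.0):
    """Log-odds of reporting the 'high' category for a multi-tone trial.

    Each tone's evidence (signed distance from the category boundary, in Hz)
    is weighted by its estimated task relevance; stat_bias shifts choices
    toward the category overrepresented in recent and long-run history."""
    evidence = sum(w * (f - boundary)
                   for f, w in zip(tone_freqs, relevance))
    return evidence + stat_bias
```

The same pair of tones flips the decision when the relevance weights swap, and a sufficiently large statistics-driven bias can override weakly weighted sensory evidence.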
Affiliation(s)
- Janaki Sheth: Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Jared S Collina: Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, PA, USA
- Eugenio Piasini: Department of Neuroscience, International School for Advanced Studies, Trieste, Italy
- Konrad P Kording: Departments of Neuroscience and Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Yale E Cohen: Departments of Otorhinolaryngology, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Maria N Geffen: Departments of Otorhinolaryngology, Neuroscience, and Neurology, University of Pennsylvania, Philadelphia, PA, USA

9. Yu K, Vanpaemel W, Tuerlinckx F, Zaman J. The representational instability in the generalization of fear learning. NPJ Sci Learn 2024; 9:78. PMID: 39702746. PMCID: PMC11659557. DOI: 10.1038/s41539-024-00287-x.
Abstract
Perception and perceptual memory play crucial roles in fear generalization, yet their dynamic interaction remains understudied. This research (N = 80) explored their relationship through a classical differential conditioning experiment. Results revealed that while fear context perception fluctuates over time with a drift effect, perceptual memory remains stable, creating a disjunction between the two systems. Surprisingly, this disjunction does not significantly impact fear generalization behavior. Although most participants demonstrated generalization aligned with perceptual rather than physical stimulus distances, incorporating perceptual memory data into perceptual distance calculations did not enhance model performance. This suggests a potential shift in the mapping of the perceptual memory component of fear context, occurring alongside perceptual dynamics. Overall, this work provides evidence for understanding fear generalization behavior through different stimulus representational processes. Such mechanistic investigations can enhance our understanding of how individuals behave when facing threats and potentially aid in developing mechanism-specific diagnoses and treatments.
Affiliation(s)
- Jonas Zaman: KU Leuven, Leuven, Belgium; University of Hasselt, Hasselt, Belgium; University of Duisburg-Essen, Essen, Germany

10. Saddler MR, McDermott JH. Models optimized for real-world tasks reveal the task-dependent necessity of precise temporal coding in hearing. Nat Commun 2024; 15:10590. PMID: 39632854. PMCID: PMC11618365. DOI: 10.1038/s41467-024-54700-5.
Abstract
Neurons encode information in the timing of their spikes in addition to their firing rates. Spike timing is particularly precise in the auditory nerve, where action potentials phase lock to sound with sub-millisecond precision, but its behavioral relevance remains uncertain. We optimized machine learning models to perform real-world hearing tasks with simulated cochlear input, assessing the precision of auditory nerve spike timing needed to reproduce human behavior. Models with high-fidelity phase locking exhibited more human-like sound localization and speech perception than models without, consistent with an essential role in human hearing. However, the temporal precision needed to reproduce human-like behavior varied across tasks, as did the precision that benefited real-world task performance. These effects suggest that perceptual domains incorporate phase locking to different extents depending on the demands of real-world hearing. The results illustrate how optimizing models for realistic tasks can clarify the role of candidate neural codes in perception.
Affiliation(s)
- Mark R Saddler: Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA; McGovern Institute for Brain Research, MIT; Center for Brains, Minds, and Machines, MIT
- Josh H McDermott: Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA; McGovern Institute for Brain Research, MIT; Center for Brains, Minds, and Machines, MIT; Program in Speech and Hearing Biosciences and Technology, Harvard, Cambridge, MA, USA

11. Cusimano M, Hewitt LB, McDermott JH. Listening with generative models. Cognition 2024; 253:105874. PMID: 39216190. DOI: 10.1016/j.cognition.2024.105874.
Abstract
Perception has long been envisioned to use an internal model of the world to explain the causes of sensory signals. However, such accounts have historically not been testable, typically requiring intractable search through the space of possible explanations. Using auditory scenes as a case study, we leveraged contemporary computational tools to infer explanations of sounds in a candidate internal generative model of the auditory world (ecologically inspired audio synthesizers). Model inferences accounted for many classic illusions. Unlike traditional accounts of auditory illusions, the model is applicable to any sound, and exhibited human-like perceptual organization for real-world sound mixtures. The combination of stimulus-computability and interpretable model structure enabled 'rich falsification', revealing additional assumptions about sound generation needed to account for perception. The results show how generative models can account for the perception of both classic illusions and everyday sensory signals, and illustrate the opportunities and challenges involved in incorporating them into theories of perception.
Affiliation(s)
- Maddie Cusimano: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, United States of America
- Luke B Hewitt: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, United States of America
- Josh H McDermott: Department of Brain and Cognitive Sciences, McGovern Institute, and Center for Brains Minds and Machines, Massachusetts Institute of Technology, United States of America; Speech and Hearing Bioscience and Technology, Harvard University, United States of America

12. Takamuku S, Arslanova I, Gomi H, Haggard P. Multidigit tactile perception II: perceptual weighting during integration follows a leading-finger priority. J Neurophysiol 2024; 132:1805-1819. PMID: 39441210. DOI: 10.1152/jn.00105.2024.
Abstract
When we run our hand across a surface, each finger typically repeats the sensory stimulation that the leading finger has already experienced. Because of this redundancy, the leading finger may attract more attention and contribute more strongly when tactile signals are integrated across fingers to form an overall percept. To test this hypothesis, we re-analyzed data collected in a previous study (Arslanova I, Takamuku S, Gomi H, Haggard P, J Neurophysiol 128: 418-433, 2022), where two probes were moved in different directions on two different fingerpads and participants reported the probes' average direction. Here, we evaluate the relative contribution of each finger to the percept and examine whether multidigit integration gives priority to the leading finger. Although the hand actually remained static in these experiments, a "functional leading finger" could be defined with reference to the average direction of the stimuli and the direction of hand-object relative motion that this implied. When participants averaged the motion direction across fingers of the same hand, the leading finger received a higher weighting than the nonleading finger, even though this biased the estimate of average direction. Importantly, this bias disappeared when averaging motion direction across the two hands. Both the reported average direction and its systematic relation to the difference between the individual stimulus directions were explained by a model of motion integration in which the sensory weighting of stimuli depends on the directions of the applied stimuli. Our finding supports the hypothesis that the leading finger, which often receives novel information in natural hand-object interactions, is prioritized in forming our tactile perception.

NEW & NOTEWORTHY The capacity of the tactile system to process multiple simultaneous stimuli is restricted. One solution could be to prioritize input from more informative sources. Here, we show that the sensory weighting accorded to each finger during multidigit touch is biased in a direction-dependent manner when different motions are delivered to the fingers of the same hand. We argue that tactile inputs are weighted based on purely geometric information to prioritize "novel" information from the leading finger.
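The weighting this account formalizes can be sketched as a weighted circular average with extra weight on the functionally leading finger. The weight value below is a hypothetical placeholder for the direction-dependent weights fitted in the paper:

```python
import math

def perceived_average(dir_leading, dir_nonleading, w_lead=0.7):
    """Weighted circular mean (radians) of two fingers' motion directions,
    with the leading finger weighted w_lead and the other 1 - w_lead."""
    x = w_lead * math.cos(dir_leading) + (1.0 - w_lead) * math.cos(dir_nonleading)
    y = w_lead * math.sin(dir_leading) + (1.0 - w_lead) * math.sin(dir_nonleading)
    return math.atan2(y, x)
```

With `w_lead > 0.5` the report is pulled toward the leading finger, reproducing the biased within-hand estimate; with equal weights, as found across the two hands, the bias vanishes.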
Affiliation(s)
- Shinya Takamuku: NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Japan
- Irena Arslanova: Institute of Cognitive Neuroscience, University College London, London, United Kingdom; Department of Psychology, Royal Holloway University of London, Egham, United Kingdom
- Hiroaki Gomi: NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Japan
- Patrick Haggard: Institute of Cognitive Neuroscience, University College London, London, United Kingdom

13. Bévalot C, Meyniel F. A dissociation between the use of implicit and explicit priors in perceptual inference. Commun Psychol 2024; 2:111. PMID: 39592724. PMCID: PMC11599933. DOI: 10.1038/s44271-024-00162-w.
Abstract
The brain constantly uses prior knowledge of the statistics of its environment to shape perception. These statistics are often implicit (not directly observable) and learned incrementally from observation, but they can also be explicitly communicated to the observer, especially in humans. Here, we show that priors are used differently in human perceptual inference depending on whether they are explicit or implicit in the environment. Bayesian modeling of learning and perception revealed that the weight of the sensory likelihood in perceptual decisions was highly correlated across participants between tasks with implicit and explicit priors, and slightly stronger in the implicit task. By contrast, the weight of priors was much less correlated across tasks, and it was markedly smaller for explicit priors. The model comparison also showed that different computations underpinned perceptual decisions depending on the origin of the priors. This dissociation may resolve previously conflicting results about the appropriate use of priors in human perception.
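The quantity compared across the two tasks can be written as a weighted log-odds combination: an ideal observer has both weights at 1, and the key finding is a markedly smaller prior weight when the prior is explicitly communicated. A sketch with placeholder weights:

```python
def posterior_log_odds(sensory_llr, prior_log_odds, w_lik=1.0, w_prior=1.0):
    """Decision variable combining the sensory log-likelihood ratio and the
    prior log-odds, each scaled by its own (fitted) weight. w_lik = w_prior
    = 1 recovers the ideal Bayesian observer; the values here are
    illustrative, not the paper's fits."""
    return w_lik * sensory_llr + w_prior * prior_log_odds
```

A downweighted explicit prior shifts the decision variable much less than the same prior used implicitly, which is the dissociation the title refers to.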
Affiliation(s)
- Caroline Bévalot: Cognitive Neuroimaging Unit, NeuroSpin (INSERM-CEA), University of Paris-Saclay, Gif-sur-Yvette, France; Sorbonne University, Doctoral College, Paris, France
- Florent Meyniel: Cognitive Neuroimaging Unit, NeuroSpin (INSERM-CEA), University of Paris-Saclay, Gif-sur-Yvette, France; GHU Paris, psychiatrie et neurosciences, Hôpital Saint-Anne, institut de neuromodulation, Paris, France

14. Geadah V, Barello G, Greenidge D, Charles AS, Pillow JW. Sparse-Coding Variational Autoencoders. Neural Comput 2024; 36:2571-2601. PMID: 39383030. DOI: 10.1162/neco_a_01715.
Abstract
The sparse coding model posits that the visual system has evolved to efficiently code natural stimuli using a sparse set of features from an overcomplete dictionary. However, the original sparse coding model suffered from two key limitations: (1) computing the neural response to an image patch required minimizing a nonlinear objective function via recurrent dynamics, and (2) fitting relied on approximate inference methods that ignored uncertainty. Although subsequent work has developed several methods to overcome these obstacles, we propose a novel solution inspired by the variational autoencoder (VAE) framework. We introduce the sparse coding variational autoencoder (SVAE), which augments the sparse coding model with a probabilistic recognition model parameterized by a deep neural network. This recognition model provides a neurally plausible feedforward implementation of the mapping from image patches to neural activities and enables a principled method for fitting the sparse coding model to data via maximization of the evidence lower bound (ELBO). The SVAE differs from standard VAEs in three key respects: the latent representation is overcomplete (there are more latent dimensions than image pixels), the prior is sparse or heavy-tailed instead of Gaussian, and the decoder network is a linear projection instead of a deep network. We fit the SVAE to natural image data under different assumed prior distributions and show that it obtains higher test performance than previous fitting methods. Finally, we examine the response properties of the recognition network and show that it captures important nonlinear properties of neurons in the early visual pathway.
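The training objective can be sketched as a single-datapoint Monte-Carlo ELBO with the three ingredients named above: a linear decoder over an overcomplete latent code, a Laplace (sparse) prior, and a factorized Gaussian recognition posterior. Shapes and hyperparameters here are illustrative, and the recognition network is abstracted to its output (mu, log_sigma):

```python
import math
import random

def elbo_estimate(x, W, mu, log_sigma, noise_sd=0.1, b=1.0,
                  n_samples=10, seed=0):
    """Monte-Carlo ELBO for one datum x (length D) under a sparse-coding VAE:
    linear decoder W (D x K rows, overcomplete when K > D), i.i.d.
    Laplace(0, b) prior on the K latents, and recognition posterior
    N(mu, exp(log_sigma)^2). Averages log p(x|z) + log p(z) - log q(z|x)
    over reparameterized samples z ~ q."""
    rng = random.Random(seed)
    D, K = len(W), len(mu)
    total = 0.0
    for _ in range(n_samples):
        z = [mu[k] + math.exp(log_sigma[k]) * rng.gauss(0.0, 1.0)
             for k in range(K)]
        recon = [sum(W[d][k] * z[k] for k in range(K)) for d in range(D)]
        log_px = sum(-0.5 * ((x[d] - recon[d]) / noise_sd) ** 2
                     - math.log(noise_sd * math.sqrt(2.0 * math.pi))
                     for d in range(D))
        log_pz = sum(-abs(zk) / b - math.log(2.0 * b) for zk in z)
        log_qz = sum(-0.5 * ((z[k] - mu[k]) / math.exp(log_sigma[k])) ** 2
                     - log_sigma[k] - 0.5 * math.log(2.0 * math.pi)
                     for k in range(K))
        total += log_px + log_pz - log_qz
    return total / n_samples
```

Training would adjust W and the recognition network to push this estimate up; the sketch only evaluates the objective for fixed parameters.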
Collapse
Affiliation(s)
- Victor Geadah
- Program in Applied and Computational Mathematics, Princeton University, Princeton, NJ 08544, U.S.A.
| | - Gabriel Barello
- Institute of Neuroscience, University of Oregon, Eugene, OR 97403, U.S.A.
| | - Daniel Greenidge
- Department of Computer Science, Princeton University, Princeton, NJ 08544, U.S.A.
| | - Adam S Charles
- Department of Biomedical Engineering, Center for Imaging Science, and Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD 21218, U.S.A.
| | - Jonathan W Pillow
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, U.S.A.
| |
Collapse
|
15
|
Hedrich NL, Schulz E, Hall-McMaster S, Schuck NW. An inductive bias for slowly changing features in human reinforcement learning. PLoS Comput Biol 2024; 20:e1012568. [PMID: 39585903 PMCID: PMC11637442 DOI: 10.1371/journal.pcbi.1012568] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2024] [Revised: 12/12/2024] [Accepted: 10/17/2024] [Indexed: 11/27/2024] Open
Abstract
Identifying goal-relevant features in novel environments is a central challenge for efficient behaviour. We asked whether humans address this challenge by relying on prior knowledge about common properties of reward-predicting features. One such property is the rate of change of features, given that behaviourally relevant processes tend to change on a slower timescale than noise. Hence, we asked whether humans are biased to learn more when task-relevant features are slow rather than fast. To test this idea, 295 human participants were asked to learn the rewards of two-dimensional bandits when either a slowly or quickly changing feature of the bandit predicted reward. Across two experiments and one preregistered replication, participants accrued more reward when a bandit's relevant feature changed slowly, and its irrelevant feature quickly, as compared to the opposite. We did not find a difference in the ability to generalise to unseen feature values between conditions. Testing how feature speed could affect learning with a set of four function approximation Kalman filter models revealed that participants had a higher learning rate for the slow feature, and adjusted their learning to both the relevance and the speed of feature changes. The larger the improvement in participants' performance for slow compared to fast bandits, the more strongly they adjusted their learning rates. These results provide evidence that human reinforcement learning favours slower features, suggesting a bias in how humans approach reward learning.
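The function-approximation Kalman-filter account mentioned in this abstract can be sketched normatively: each feature's reward weight is modeled as drifting with per-feature process noise, and the Kalman gain then acts as a per-feature learning rate. This is an editor's minimal sketch of a standard Kalman update, not the authors' fitted models; variable names are hypothetical:

```python
import numpy as np

def kalman_bandit_update(w, P, x, r, q, obs_var=1.0):
    """One Kalman-filter update of reward weights w for feature vector x.

    P: posterior covariance over weights; q: per-feature process noise
    (assumed drift rate). Features assumed to drift faster receive larger
    Kalman gains, i.e. higher effective learning rates.
    """
    P = P + np.diag(q)                 # weights may drift between trials
    pred = x @ w                       # predicted reward
    s = x @ P @ x + obs_var            # predictive variance of the reward
    k = P @ x / s                      # Kalman gain = per-feature learning rate
    w = w + k * (r - pred)             # prediction-error update
    P = P - np.outer(k, x @ P)         # posterior covariance update
    return w, P, k
```

Note the normative direction of the effect: larger assumed drift yields a larger gain. The participants' higher learning rate for the *slow* feature is therefore the bias the paper reports, not what this filter alone would produce.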
Collapse
Affiliation(s)
- Noa L. Hedrich
- Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Berlin, Germany
- Institute of Psychology, Universität Hamburg, Hamburg, Germany
- Einstein Center for Neurosciences Berlin, Charité Universitätsmedizin Berlin, Berlin, Germany
| | - Eric Schulz
- Max Planck Research Group Computational Principles of Intelligence, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Helmholtz Institute for Human-Centered AI, Helmholtz Center Munich, Neuherberg, Germany
| | - Sam Hall-McMaster
- Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Berlin, Germany
- Department of Psychology, Harvard University, Cambridge, Massachusetts, United States of America
| | - Nicolas W. Schuck
- Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Berlin, Germany
- Institute of Psychology, Universität Hamburg, Hamburg, Germany
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Berlin, Germany
| |
Collapse
|
16
|
Negen J. No evidence for a difference in Bayesian reasoning for egocentric versus allocentric spatial cognition. PLoS One 2024; 19:e0312018. [PMID: 39388501 PMCID: PMC11466427 DOI: 10.1371/journal.pone.0312018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2024] [Accepted: 09/30/2024] [Indexed: 10/12/2024] Open
Abstract
Bayesian reasoning (i.e. prior integration, cue combination, and loss minimization) has emerged as a prominent model for some kinds of human perception and cognition. The major theoretical issue is that we do not yet have a robust way to predict when we will or will not observe Bayesian effects in human performance. Here we tested a proposed divide in terms of Bayesian reasoning for egocentric spatial cognition versus allocentric spatial cognition (self-centered versus world-centered). The proposal states that people will show stronger Bayesian reasoning effects when it is possible to perform the Bayesian calculations within the egocentric frame, as opposed to requiring an allocentric frame. Three experiments were conducted with one egocentric-allowing condition and one allocentric-requiring condition but otherwise matched as closely as possible. No difference was found in terms of prior integration (Experiment 1), cue combination (Experiment 2), or loss minimization (Experiment 3). The contrast in previous reports, where Bayesian effects are present in many egocentric-allowing tasks while they are absent in many allocentric-requiring tasks, is likely due to other differences between the tasks; for example, allocentric-requiring tasks are often more complex and memory intensive.
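The cue combination tested in Experiment 2 standardly means reliability-weighted averaging of two noisy estimates. A minimal sketch of the normative rule (the textbook formulation, not the paper's specific task model):

```python
def combine_cues(m1, var1, m2, var2):
    """Reliability-weighted (inverse-variance) combination of two cues.

    Each cue is a noisy estimate (mean, variance); the optimal combined
    estimate weights each cue by its reliability (1/variance), and the
    combined variance is smaller than either cue alone.
    """
    w1 = (1.0 / var1) / (1.0 / var1 + 1.0 / var2)
    mean = w1 * m1 + (1.0 - w1) * m2
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    return mean, var
```

Observing combined-cue precision close to this prediction (rather than to the better single cue) is the usual empirical signature of Bayesian cue combination.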
Collapse
Affiliation(s)
- James Negen
- Psychology Department, Liverpool John Moores University, Liverpool, United Kingdom
| |
Collapse
|
17
|
Zhou L, Liu Y, Jiang Y, Wang W, Xu P, Zhou K. The distinct development of stimulus and response serial dependence. Psychon Bull Rev 2024; 31:2137-2147. [PMID: 38379075 PMCID: PMC11543724 DOI: 10.3758/s13423-024-02474-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/04/2024] [Indexed: 02/22/2024]
Abstract
Serial dependence (SD) is a phenomenon wherein current perceptions are biased by the previous stimulus and response. This helps to attenuate perceptual noise and variability in sensory input and facilitates stable ongoing perceptions of the environment. However, little is known about the developmental trajectory of SD. This study investigates how the stimulus and response biases of the SD effect develop across three age groups. Conventional analyses, in which previous stimulus and response biases were assessed separately, revealed significant changes in the biases over time. Previous stimulus bias shifted from repulsion to attraction, while previous response bias evolved from attraction to greater attraction. However, there was a strong correlation between stimulus and response orientations. Therefore, a generalized linear mixed-effects (GLME) analysis that simultaneously considered both previous stimulus and response outperformed separate analyses. This revealed that previous stimulus and response resulted in two distinct biases with different developmental trajectories. The repulsion bias of the previous stimulus remained relatively stable across all age groups, whereas the attraction bias of the previous response was significantly stronger in adults than in children and adolescents. These findings demonstrate that the repulsion bias towards preceding stimuli is established early in the developing brain (at least by around 10 years old), while the attraction bias towards responses is not fully developed until adulthood. Our findings provide new insights into the development of the SD phenomenon and how humans integrate two opposing mechanisms into their perceptual responses to external input during development.
Collapse
Affiliation(s)
- Liqin Zhou
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, China
| | - Yujie Liu
- Sino-Danish College, University of Chinese Academy of Sciences, Beijing, China
- State Key Laboratory of Brain and Cognitive Sciences, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
| | - Yuhan Jiang
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, China
| | - Wenbo Wang
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, China
| | - Pengfei Xu
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, China
| | - Ke Zhou
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, China.
| |
Collapse
|
18
|
Monk T, Dennler N, Ralph N, Rastogi S, Afshar S, Urbizagastegui P, Jarvis R, van Schaik A, Adamatzky A. Electrical Signaling Beyond Neurons. Neural Comput 2024; 36:1939-2029. [PMID: 39141803 DOI: 10.1162/neco_a_01696] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2023] [Accepted: 05/21/2024] [Indexed: 08/16/2024]
Abstract
Neural action potentials (APs) are difficult to interpret as signal encoders and/or computational primitives. Their relationships with stimuli and behaviors are obscured by the staggering complexity of nervous systems themselves. We can reduce this complexity by observing that "simpler" neuron-less organisms also transduce stimuli into transient electrical pulses that affect their behaviors. Without a complicated nervous system, APs are often easier to understand as signal/response mechanisms. We review examples of nonneural stimulus transductions in domains of life largely neglected by theoretical neuroscience: bacteria, protozoans, plants, fungi, and neuron-less animals. We report properties of those electrical signals, for example their amplitudes, durations, ionic bases, refractory periods, and particularly their ecological purposes. We compare those properties with those of neurons to infer the tasks and selection pressures that neurons satisfy. Throughout the tree of life, nonneural stimulus transductions time behavioral responses to environmental changes. Nonneural organisms represent the presence or absence of a stimulus with the presence or absence of an electrical signal. Their transductions usually exhibit high sensitivity and specificity to a stimulus, but are often slow compared to neurons. Neurons appear to be sacrificing the specificity of their stimulus transductions for sensitivity and speed. We interpret cellular stimulus transductions as a cell's assertion that it detected something important at that moment in time. In particular, we consider neural APs as fast but noisy detection assertions. We infer that a principal goal of nervous systems is to detect extremely weak signals from noisy sensory spikes under enormous time pressure. We discuss neural computation proposals that address this goal by casting neurons as devices that implement online, analog, probabilistic computations with their membrane potentials. Those proposals imply a measurable relationship between afferent neural spiking statistics and efferent neural membrane electrophysiology.
Collapse
Affiliation(s)
- Travis Monk
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Sydney, NSW 2747, Australia
| | - Nik Dennler
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Sydney, NSW 2747, Australia
- Biocomputation Group, University of Hertfordshire, Hatfield, Hertfordshire AL10 9AB, U.K.
| | - Nicholas Ralph
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Sydney, NSW 2747, Australia
| | - Shavika Rastogi
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Sydney, NSW 2747, Australia
- Biocomputation Group, University of Hertfordshire, Hatfield, Hertfordshire AL10 9AB, U.K.
| | - Saeed Afshar
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Sydney, NSW 2747, Australia
| | - Pablo Urbizagastegui
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Sydney, NSW 2747, Australia
| | - Russell Jarvis
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Sydney, NSW 2747, Australia
| | - André van Schaik
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Sydney, NSW 2747, Australia
| | - Andrew Adamatzky
- Unconventional Computing Laboratory, University of the West of England, Bristol BS16 1QY, U.K.
| |
Collapse
|
19
|
Clark DA, Fitzgerald JE. Optimization in Visual Motion Estimation. Annu Rev Vis Sci 2024; 10:23-46. [PMID: 38663426 PMCID: PMC11998607 DOI: 10.1146/annurev-vision-101623-025432] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/25/2025]
Abstract
Sighted animals use visual signals to discern directional motion in their environment. Motion is not directly detected by visual neurons, and it must instead be computed from light signals that vary over space and time. This makes visual motion estimation a near universal neural computation, and decades of research have revealed much about the algorithms and mechanisms that generate directional signals. The idea that sensory systems are optimized for performance in natural environments has deeply impacted this research. In this article, we review the many ways that optimization has been used to quantitatively model visual motion estimation and reveal its underlying principles. We emphasize that no single optimization theory has dominated the literature. Instead, researchers have adeptly incorporated different computational demands and biological constraints that are pertinent to the specific brain system and animal model under study. The successes and failures of the resulting optimization models have thereby provided insights into how computational demands and biological constraints together shape neural computation.
Collapse
Affiliation(s)
- Damon A Clark
- Department of Molecular, Cellular, and Developmental Biology, Yale University, New Haven, Connecticut, USA;
| | - James E Fitzgerald
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA
- Department of Neurobiology, Northwestern University, Evanston, Illinois, USA;
| |
Collapse
|
20
|
Zeki S, Hale ZF, Beyh A, Rasche SE. Perceptual axioms are irreconcilable with Euclidean geometry. Eur J Neurosci 2024; 60:4217-4223. [PMID: 38803020 DOI: 10.1111/ejn.16430] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2023] [Revised: 04/12/2024] [Accepted: 05/15/2024] [Indexed: 05/29/2024]
Abstract
There are different definitions of axioms, but the one that seems to have general approval is that axioms are statements whose truths are universally accepted but cannot be proven; they are the foundation from which further propositional truths are derived. Previous attempts, led by David Hilbert, to show that all of mathematics can be built into an axiomatic system that is complete and consistent failed when Kurt Gödel proved that there will always be statements which are known to be true but can never be proven within the same axiomatic system. But Gödel and his followers took no account of brain mechanisms that generate and mediate logic. In this largely theoretical paper, supported by previous experiments and by new experiments reported below, we show that in the case of so-called 'optical illusions' there exists a significant and irreconcilable difference between their visual perception and their description according to Euclidean geometry; when participants are asked to adjust, from an initial randomised state, the perceptual geometric axioms to conform to the Euclidean description, the two never match, although the degree of mismatch varies between individuals. These results provide evidence that perceptual axioms, or statements known to be perceptually true, cannot be described mathematically. Thus, the logic of the visual perceptual system is irreconcilable with the cognitive (mathematical) system and cannot be updated even when knowledge of the difference between the two is available. Hence, no one brain reality is more 'objective' than any other.
Collapse
Affiliation(s)
- Semir Zeki
- Laboratory of Neurobiology, University College London, London, UK
| | - Zachary F Hale
- Laboratory of Neurobiology, University College London, London, UK
| | - Ahmad Beyh
- Laboratory of Neurobiology, University College London, London, UK
| | - Samuel E Rasche
- Laboratory of Neurobiology, University College London, London, UK
| |
Collapse
|
21
|
Haynes JD, Gallagher M, Culling JF, Freeman TCA. The precision of signals encoding active self-movement. J Neurophysiol 2024; 132:389-402. [PMID: 38863427 DOI: 10.1152/jn.00370.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2023] [Revised: 06/07/2024] [Accepted: 06/07/2024] [Indexed: 06/13/2024] Open
Abstract
Everyday actions like moving the head, walking around, and grasping objects are typically self-controlled. This presents a problem when studying the signals encoding such actions because active self-movement is difficult to control experimentally. Available techniques demand repeatable trials, but each action is unique, making it difficult to measure fundamental properties like psychophysical thresholds. We present a novel paradigm that recovers both precision and bias of self-movement signals with minimal constraint on the participant. The paradigm relies on linking image motion to previous self-movement, and two experimental phases to extract the signal encoding the latter. The paradigm takes care of a hidden source of external noise not previously accounted for in techniques that link display motion to self-movement in real time (e.g., virtual reality). We use head rotations as an example of self-movement, and show that the precision of the signals encoding head movement depends on whether they are being used to judge visual motion or auditory motion. We find that perceived motion is slowed during head movement in both cases. The "nonimage" signals encoding active head rotation (motor commands, proprioception, and vestibular cues) are therefore biased toward lower speeds and/or displacements. In a second experiment, we trained participants to rotate their heads at different rates and found that the imprecision of the head rotation signal rises proportionally with head speed (Weber's law). We discuss the findings in terms of the different motion cues used by vision and hearing, and the implications they have for Bayesian models of motion perception. NEW & NOTEWORTHY We present a psychophysical technique for measuring the precision of signals encoding active self-movements. Using head movements, we show that 1) precision is greater when active head rotation is performed using visual comparison stimuli versus auditory; 2) precision decreases with head speed (Weber's law); 3) perceived speed is lower during head rotation. The findings may reflect the steps needed to convert different cues into common units, and challenge standard Bayesian models of motion perception.
Collapse
Affiliation(s)
- Joshua D Haynes
- School of Psychology, Cardiff University, Cardiff, United Kingdom
| | - Maria Gallagher
- School of Psychology, University of Kent, Canterbury, United Kingdom
| | - John F Culling
- School of Psychology, Cardiff University, Cardiff, United Kingdom
| | - Tom C A Freeman
- School of Psychology, Cardiff University, Cardiff, United Kingdom
| |
Collapse
|
22
|
Mazuz Y, Kessler Y, Ganel T. Age-related changes in the susceptibility to visual illusions of size. Sci Rep 2024; 14:14583. [PMID: 38918501 PMCID: PMC11199550 DOI: 10.1038/s41598-024-65405-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2024] [Accepted: 06/19/2024] [Indexed: 06/27/2024] Open
Abstract
As the global population ages, understanding of the effect of aging on visual perception is of growing importance. This study investigates age-related changes in size perception across adulthood through the lens of three visual illusions: the Ponzo, Ebbinghaus, and Height-width illusions. Utilizing the Bayesian conceptualization of the aging brain, which posits increased reliance on prior knowledge with age, we explored potential differences in the susceptibility to visual illusions across different age groups in adults (ages 20-85 years). To this end, we used the BTPI (Ben-Gurion University Test for Perceptual Illusions), an online validated battery of visual illusions developed in our lab. The findings revealed distinct patterns of age-related changes for each of the illusions, challenging the idea of a generalized increase in reliance on prior knowledge with age. Specifically, we observed a systematic reduction in susceptibility to the Ebbinghaus illusion with age, while susceptibility to the Height-width illusion increased with age. As for the Ponzo illusion, there were no significant changes with age. These results underscore the complexity of age-related changes in visual perception and converge with previous findings to support the idea that different visual illusions of size are mediated by distinct perceptual mechanisms.
Collapse
Affiliation(s)
- Yarden Mazuz
- Department of Psychology, Ben-Gurion University of the Negev, 8410500, Beer-Sheva, Israel
| | - Yoav Kessler
- Department of Psychology, Ben-Gurion University of the Negev, 8410500, Beer-Sheva, Israel
| | - Tzvi Ganel
- Department of Psychology, Ben-Gurion University of the Negev, 8410500, Beer-Sheva, Israel.
| |
Collapse
|
23
|
Ryan CP, Ciotti S, Balestrucci P, Bicchi A, Lacquaniti F, Bianchi M, Moscatelli A. The relativity of reaching: Motion of the touched surface alters the trajectory of hand movements. iScience 2024; 27:109871. [PMID: 38784005 PMCID: PMC11112373 DOI: 10.1016/j.isci.2024.109871] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2023] [Revised: 11/10/2023] [Accepted: 04/29/2024] [Indexed: 05/25/2024] Open
Abstract
For dexterous control of the hand, humans integrate sensory information and prior knowledge regarding their bodies and the world. We studied the role of touch in hand motor control by challenging a fundamental prior assumption-that self-motion of inanimate objects is unlikely upon contact. In a reaching task, participants slid their fingertips across a robotic interface, with their hand hidden from sight. Unbeknownst to the participants, the robotic interface remained static, followed hand movement, or moved in opposition to it. We considered two hypotheses. Either participants were able to account for surface motion or, if the stationarity assumption held, they would integrate the biased tactile cues and proprioception. Motor errors consistent with the latter hypothesis were observed. The role of visual feedback, tactile sensitivity, and friction was also investigated. Our study carries profound implications for human-machine collaboration in a world where objects may no longer conform to the stationarity assumption.
Collapse
Affiliation(s)
- Colleen P. Ryan
- Department of Systems Medicine and Centre of Space Biomedicine, University of Rome Tor Vergata, 00133 Rome, Italy
- Laboratory of Neuromotor Physiology, Santa Lucia Foundation IRCCS, 00179 Rome, Italy
| | - Simone Ciotti
- Laboratory of Neuromotor Physiology, Santa Lucia Foundation IRCCS, 00179 Rome, Italy
- Research Centre E. Piaggio and Department of Information Engineering, University of Pisa, 56122 Pisa, Italy
| | - Priscilla Balestrucci
- Laboratory of Neuromotor Physiology, Santa Lucia Foundation IRCCS, 00179 Rome, Italy
| | - Antonio Bicchi
- Research Centre E. Piaggio and Department of Information Engineering, University of Pisa, 56122 Pisa, Italy
- Istituto Italiano di Tecnologia, 16163 Genova, Italy
| | - Francesco Lacquaniti
- Department of Systems Medicine and Centre of Space Biomedicine, University of Rome Tor Vergata, 00133 Rome, Italy
- Laboratory of Neuromotor Physiology, Santa Lucia Foundation IRCCS, 00179 Rome, Italy
| | - Matteo Bianchi
- Research Centre E. Piaggio and Department of Information Engineering, University of Pisa, 56122 Pisa, Italy
| | - Alessandro Moscatelli
- Department of Systems Medicine and Centre of Space Biomedicine, University of Rome Tor Vergata, 00133 Rome, Italy
- Laboratory of Neuromotor Physiology, Santa Lucia Foundation IRCCS, 00179 Rome, Italy
| |
Collapse
|
24
|
Boundy-Singer ZM, Ziemba CM, Hénaff OJ, Goris RLT. How does V1 population activity inform perceptual certainty? J Vis 2024; 24:12. [PMID: 38884544 PMCID: PMC11185272 DOI: 10.1167/jov.24.6.12] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2024] [Accepted: 05/06/2024] [Indexed: 06/18/2024] Open
Abstract
Neural population activity in sensory cortex informs our perceptual interpretation of the environment. Oftentimes, this population activity will support multiple alternative interpretations. The larger the spread of probability over different alternatives, the more uncertain the selected perceptual interpretation. We test the hypothesis that the reliability of perceptual interpretations can be revealed through simple transformations of sensory population activity. We recorded V1 population activity in fixating macaques while presenting oriented stimuli under different levels of nuisance variability and signal strength. We developed a decoding procedure to infer from V1 activity the most likely stimulus orientation as well as the certainty of this estimate. Our analysis shows that response magnitude, response dispersion, and variability in response gain all offer useful proxies for orientation certainty. Of these three metrics, the last one has the strongest association with the decoder's uncertainty estimates. These results clarify that the nature of neural population activity in sensory cortex provides downstream circuits with multiple options to assess the reliability of perceptual interpretations.
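One simple way to read out both an orientation estimate and a certainty proxy from population activity is a population-vector decode, where the length of the resultant vector indexes how concentrated the activity is. This is an editor's illustrative sketch, not the decoding procedure used in the paper:

```python
import numpy as np

def decode_orientation(rates, pref_deg):
    """Population-vector decode of orientation (180-degree periodic).

    rates: firing rates of the population; pref_deg: preferred orientations.
    Returns the estimated orientation in degrees and the resultant length,
    a simple certainty proxy: concentrated activity gives values near 1,
    dispersed activity gives values near 0.
    """
    ang = np.deg2rad(pref_deg) * 2.0          # map orientations onto the full circle
    vec = np.sum(rates * np.exp(1j * ang))    # rate-weighted resultant vector
    est = np.rad2deg(np.angle(vec) / 2.0) % 180.0
    certainty = np.abs(vec) / np.sum(rates)
    return est, certainty
```

The paper's analysis goes further, comparing several such population metrics (magnitude, dispersion, gain variability) against a decoder's uncertainty estimates; this sketch only shows the dispersion-style readout.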
Collapse
Affiliation(s)
- Zoe M Boundy-Singer
- Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA
| | - Corey M Ziemba
- Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA
| | | | - Robbe L T Goris
- Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA
| |
Collapse
|
25
|
Charlton JA, Goris RLT. Abstract deliberation by visuomotor neurons in prefrontal cortex. Nat Neurosci 2024; 27:1167-1175. [PMID: 38684894 PMCID: PMC11156582 DOI: 10.1038/s41593-024-01635-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Accepted: 03/29/2024] [Indexed: 05/02/2024]
Abstract
During visually guided behavior, the prefrontal cortex plays a pivotal role in mapping sensory inputs onto appropriate motor plans. When the sensory input is ambiguous, this involves deliberation. It is not known whether the deliberation is implemented as a competition between possible stimulus interpretations or between possible motor plans. Here we study neural population activity in the prefrontal cortex of macaque monkeys trained to flexibly report perceptual judgments of ambiguous visual stimuli. We find that the population activity initially represents the formation of a perceptual choice before transitioning into the representation of the motor plan. Stimulus strength and prior expectations both bear on the formation of the perceptual choice, but not on the formation of the action plan. These results suggest that prefrontal circuits involved in action selection are also used for the deliberation of abstract propositions divorced from a specific motor plan, thus providing a crucial mechanism for abstract reasoning.
Collapse
Affiliation(s)
- Julie A Charlton
- Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
| | - Robbe L T Goris
- Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA.
| |
Collapse
|
26
|
Pickard K, Davidson MJ, Kim S, Alais D. Incongruent active head rotations increase visual motion detection thresholds. Neurosci Conscious 2024; 2024:niae019. [PMID: 38757119 PMCID: PMC11097904 DOI: 10.1093/nc/niae019] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2023] [Revised: 03/18/2024] [Accepted: 04/24/2024] [Indexed: 05/18/2024] Open
Abstract
Attributing a visual motion signal to its correct source-be that external object motion, self-motion, or some combination of both-seems effortless, and yet often involves disentangling a complex web of motion signals. Existing literature focuses on either translational motion (heading) or eye movements, leaving much to be learnt about the influence of a wider range of self-motions, such as active head rotations, on visual motion perception. This study investigated how active head rotations affect visual motion detection thresholds, comparing conditions where visual motion and head-turn direction were either congruent or incongruent. Participants judged the direction of a visual motion stimulus while rotating their head or remaining stationary, using a fixation-locked Virtual Reality display with integrated head-movement recordings. Thresholds to perceive visual motion were higher in both active-head rotation conditions compared to stationary, though no differences were found between congruent or incongruent conditions. Participants also showed a significant bias to report seeing visual motion travelling in the same direction as the head rotation. Together, these results demonstrate active head rotations increase visual motion perceptual thresholds, particularly in cases of incongruent visual and active vestibular stimulation.
Affiliation(s)
- Kate Pickard, School of Psychology, The University of Sydney, Sydney, NSW 2006, Australia
- Matthew J Davidson, School of Psychology, The University of Sydney, Sydney, NSW 2006, Australia
- Sujin Kim, School of Psychology, The University of Sydney, Sydney, NSW 2006, Australia
- David Alais, School of Psychology, The University of Sydney, Sydney, NSW 2006, Australia

27
Chin BM, Wang M, Mikkelsen LT, Friedman CT, Ng CJ, Chu MA, Cooper EA. A paradigm for characterizing motion misperception in people with typical vision and low vision. Optom Vis Sci 2024; 101:252-262. [PMID: 38857038 DOI: 10.1097/opx.0000000000002139]
Abstract
PURPOSE: We aimed to develop a paradigm that can efficiently characterize motion percepts in people with low vision and compare their responses with well-known misperceptions made by people with typical vision when targets are hard to see.
METHODS: We recruited a small cohort of individuals with reduced acuity and contrast sensitivity (n = 5) as well as a comparison cohort with typical vision (n = 5) to complete a psychophysical study. Study participants were asked to judge the motion direction of a tilted rhombus that was either high or low contrast. In a series of trials, the rhombus oscillated vertically, horizontally, or diagonally. Participants indicated the perceived motion direction using a number wheel with 12 possible directions, and statistical tests were used to examine response biases.
RESULTS: All participants with typical vision showed systematic misperceptions well predicted by a Bayesian inference model. Specifically, their perception of vertical or horizontal motion was biased toward directions orthogonal to the long axis of the rhombus. They had larger biases for hard-to-see (low contrast) stimuli. Two participants with low vision had a similar bias, but with no difference between high- and low-contrast stimuli. The other participants with low vision were unbiased in their percepts or biased in the opposite direction.
CONCLUSIONS: Our results suggest that some people with low vision may misperceive motion in a systematic way similar to people with typical vision. However, we observed large individual differences. Future work will aim to uncover reasons for such differences and identify aspects of vision that predict susceptibility.
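The Bayesian inference model invoked in this abstract (a prior favoring slow speeds combined with a likelihood that widens as contrast falls) can be caricatured in a few lines. The parameter values below are illustrative assumptions, not the study's fitted values:

```python
def bayes_estimate(measurement, sigma_like, prior_mean=0.0, sigma_prior=1.0):
    """Posterior mean for a Gaussian prior times a Gaussian likelihood.

    A lower-contrast stimulus is modeled by a larger likelihood sigma,
    which shifts weight away from the measurement toward the prior.
    """
    w_data = sigma_prior**2 / (sigma_prior**2 + sigma_like**2)
    return w_data * measurement + (1.0 - w_data) * prior_mean

true_speed = 2.0
est_high_contrast = bayes_estimate(true_speed, sigma_like=0.5)  # precise likelihood
est_low_contrast = bayes_estimate(true_speed, sigma_like=2.0)   # noisy likelihood

# Both estimates are pulled toward the slow-motion prior at 0,
# but the low-contrast (noisy) one is pulled more.
assert abs(est_low_contrast) < abs(est_high_contrast) < true_speed
```

With Gaussians, the posterior mean is a reliability-weighted average, so widening the likelihood necessarily increases the bias toward the prior — the qualitative pattern the typical-vision observers showed.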
Affiliation(s)
- Benjamin M Chin, Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, California
- Minqi Wang, Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, California
- Loganne T Mikkelsen, Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, California
- Clara T Friedman, Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, California
- Cherlyn J Ng, Department of Cognitive Sciences, The University of California, Irvine, Irvine, California
- Marlena A Chu, Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, California

28
Shaw S, Kilpatrick ZP. Representing stimulus motion with waves in adaptive neural fields. J Comput Neurosci 2024; 52:145-164. [PMID: 38607466 PMCID: PMC11802407 DOI: 10.1007/s10827-024-00869-z]
Abstract
Traveling waves of neural activity emerge in cortical networks both spontaneously and in response to stimuli. The spatiotemporal structure of waves can indicate the information they encode and the physiological processes that sustain them. Here, we investigate the stimulus-response relationships of traveling waves emerging in adaptive neural fields as a model of visual motion processing. Neural field equations model the activity of cortical tissue as a continuum excitable medium, and adaptive processes provide negative feedback, generating localized activity patterns. Synaptic connectivity in our model is described by an integral kernel that weakens dynamically due to activity-dependent synaptic depression, leading to marginally stable traveling fronts (with attenuated backs) or pulses of a fixed speed. Our analysis quantifies how weak stimuli shift the relative position of these waves over time, characterized by a wave response function we obtain perturbatively. Persistent and continuously visible stimuli model moving visual objects. Intermittent flashes that hop across visual space can produce the experience of smooth apparent visual motion. Entrainment of waves to both kinds of moving stimuli is well characterized by our theory and numerical simulations, providing a mechanistic description of the perception of visual motion.
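As a loose intuition for how adaptation turns local excitation into directed propagation, here is a deliberately minimal discrete caricature: a three-state excitable chain in which refractoriness stands in for the adaptation (negative feedback) described above. This is an invented toy, not the continuum neural field model the paper analyzes:

```python
def excitable_chain(n=60, steps=50):
    """Minimal excitable medium on a 1D chain.

    Each cell is resting (0), active (1), or refractory (2). An active
    cell excites the resting cell ahead of it and then becomes
    refractory; refractoriness (the adaptation analogue) blocks
    back-propagation, so a single pulse travels in one direction.
    """
    state = [0] * n
    state[0] = 1                       # seed a pulse at the left edge
    positions = []
    for _ in range(steps):
        nxt = list(state)
        for i, s in enumerate(state):
            if s == 1:
                nxt[i] = 2             # active -> refractory
                if i + 1 < n and state[i + 1] == 0:
                    nxt[i + 1] = 1     # excite the cell ahead
            elif s == 2:
                nxt[i] = 0             # recover to rest
        state = nxt
        if 1 in state:
            positions.append(state.index(1))
    return positions

pos = excitable_chain()
assert pos == sorted(pos) and pos[-1] > pos[0]  # the pulse travels steadily rightward
```

The continuum model replaces these three states with smooth activity and adaptation variables, but the mechanism is the same: negative feedback in the wake of activity forces the active region to move rather than spread.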
Affiliation(s)
- Sage Shaw, Department of Applied Mathematics, University of Colorado Boulder, Boulder, CO, USA
- Zachary P Kilpatrick, Department of Applied Mathematics, University of Colorado Boulder, Boulder, CO, USA; Institute for Cognitive Sciences, University of Colorado Boulder, Boulder, CO, USA

29
Sun Q, Wang JY, Gong XM. Conflicts between short- and long-term experiences affect visual perception through modulating sensory or motor response systems: Evidence from Bayesian inference models. Cognition 2024; 246:105768. [PMID: 38479091 DOI: 10.1016/j.cognition.2024.105768]
Abstract
The independent effects of short- and long-term experiences on visual perception have been discussed for decades. However, no study has investigated whether and how these experiences simultaneously affect our visual perception. To address this question, we asked participants to estimate their self-motion directions (i.e., headings) simulated from optic flow, in which a long-term experience learned in everyday life (i.e., straight-forward motion being more common than lateral motion) plays an important role. The headings were selected from three distributions that resembled a peak, a hill, and a flat line, creating different short-term experiences. Importantly, the proportions of headings deviating from straight-forward motion gradually increased across the peak, hill, and flat distributions, creating progressively greater conflict between the long- and short-term experiences. The results showed that participants biased their heading estimates towards the straight-ahead direction and towards previously seen headings, and both biases increased with the growing experience conflict. This suggests that long- and short-term experiences simultaneously affect visual perception. Finally, we developed two Bayesian models based on alternative assumptions: that the experience conflict altered the likelihood distribution of the sensory representation (Model 1) or the motor response system (Model 2). The results showed that both models accurately predicted participants' estimation biases. However, Model 1 predicted a higher variance of the serial dependence than Model 2, whereas Model 2 predicted a higher variance of the bias towards the straight-ahead direction than Model 1. This suggests that the experience conflict can influence visual perception by affecting both the sensory and motor response systems. Taken together, the current study systematically reveals the effects of long- and short-term experiences on visual perception and the underlying Bayesian processing mechanisms.
Affiliation(s)
- Qi Sun, Department of Psychology, Zhejiang Normal University, Jinhua, PR China; Intelligent Laboratory of Zhejiang Province in Mental Health and Crisis Intervention for Children and Adolescents, Jinhua, PR China; Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, PR China
- Jing-Yi Wang, Department of Psychology, Zhejiang Normal University, Jinhua, PR China
- Xiu-Mei Gong, Department of Psychology, Zhejiang Normal University, Jinhua, PR China

30
Futagawa K, Ikeda H, Negishi L, Kurumizaka H, Yamamoto A, Furihata K, Ito Y, Ikeya T, Nagata K, Funabara D, Suzuki M. Structural and Functional Analysis of the Amorphous Calcium Carbonate-Binding Protein Paramyosin in the Shell of the Pearl Oyster, Pinctada fucata. Langmuir 2024; 40:8373-8392. [PMID: 38606767 DOI: 10.1021/acs.langmuir.3c03820]
Abstract
Amorphous calcium carbonate (ACC) is an important precursor phase for the formation of aragonite crystals in the shells of Pinctada fucata. To identify the ACC-binding protein in the inner aragonite layer of the shell, extracts from the shell were used in the ACC-binding experiments. Semiquantitative analyses using liquid chromatography-mass spectrometry revealed that paramyosin was strongly associated with ACC in the shell. We discovered that paramyosin, a major component of the adductor muscle, was included in the myostracum, which is the microstructure of the shell attached to the adductor muscle. Purified paramyosin accumulates calcium carbonate and induces the prism structure of aragonite crystals, which is related to the morphology of prism aragonite crystals in the myostracum. Nuclear magnetic resonance measurements revealed that the Glu-rich region was bound to ACC. Activity of the Glu-rich region was stronger than that of the Asp-rich region. These results suggest that paramyosin in the adductor muscle is involved in the formation of aragonite prisms in the myostracum.
Affiliation(s)
- Kei Futagawa, Department of Applied Biological Chemistry, Graduate School of Agricultural and Life Sciences, University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo 113-8657, Japan
- Haruka Ikeda, Department of Applied Biological Chemistry, Graduate School of Agricultural and Life Sciences, University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo 113-8657, Japan
- Lumi Negishi, Institute for Quantitative Biosciences, The University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo 113-8657, Japan
- Hitoshi Kurumizaka, Institute for Quantitative Biosciences, The University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo 113-8657, Japan
- Ayame Yamamoto, Graduate School of Bioresources, Mie University, Tsu, Mie 514-8507, Japan
- Kazuo Furihata, Department of Applied Biological Chemistry, Graduate School of Agricultural and Life Sciences, University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo 113-8657, Japan
- Yutaka Ito, Department of Chemistry, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji, Tokyo 192-0397, Japan
- Teppei Ikeya, Department of Chemistry, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji, Tokyo 192-0397, Japan
- Koji Nagata, Department of Applied Biological Chemistry, Graduate School of Agricultural and Life Sciences, University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo 113-8657, Japan
- Daisuke Funabara, Graduate School of Bioresources, Mie University, Tsu, Mie 514-8507, Japan
- Michio Suzuki, Department of Applied Biological Chemistry, Graduate School of Agricultural and Life Sciences, University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo 113-8657, Japan

31
Kingdom FAA, Yakobi Y, Wang XC. Stereoscopic slant contrast revisited. J Vis 2024; 24:24. [PMID: 38683571 PMCID: PMC11059801 DOI: 10.1167/jov.24.4.24]
Abstract
The perceived slant of a stereoscopic surface is altered by the presence of a surrounding surface, a phenomenon termed stereo slant contrast. Previous studies have shown that a slanted surround causes a fronto-parallel surface to appear slanted in the opposite direction, an instance of "bidirectional" contrast. A few studies have examined slant contrast using slanted as opposed to fronto-parallel test surfaces, and these also have shown slant contrast. Here, we use a matching method to examine slant contrast over a wide range of combinations of surround and test slants, one aim being to determine whether stereo slant contrast transfers across opposite directions of test and surround slant. We also examine the effect of the test on the perceived slant of the surround. Test slant contrast was found to be bidirectional in virtually all test-surround combinations and transferred across opposite test and surround slants, with little or no decline in magnitude as the test-surround slant difference approached the limit. There was a weak bidirectional effect of the test slant on the perceived slant of the surround. We consider how our results might be explained by four mechanisms: (a) normalization of stereo slant to vertical; (b) divisive normalization of stereo slant channels in a manner analogous to the tilt illusion; (c) interactions between center and surround disparity-gradient detectors; and (d) uncertainty in slant estimation. We conclude that the third of these (interactions between center and surround disparity-gradient detectors) is the most likely cause of stereo slant contrast.
Affiliation(s)
- Frederick A A Kingdom, McGill Vision Research, Department of Ophthalmology, Montréal General Hospital, Montréal, QC, Canada
- Yoel Yakobi, McGill Vision Research, Department of Ophthalmology, Montréal General Hospital, Montréal, QC, Canada
- Xingao Clara Wang, McGill Vision Research, Department of Ophthalmology, Montréal General Hospital, Montréal, QC, Canada

32
Hahn M, Wei XX. A unifying theory explains seemingly contradictory biases in perceptual estimation. Nat Neurosci 2024; 27:793-804. [PMID: 38360947 DOI: 10.1038/s41593-024-01574-x]
Abstract
Perceptual biases are widely regarded as offering a window into the neural computations underlying perception. To understand these biases, previous work has proposed a number of conceptually different, and even seemingly contradictory, explanations, including attraction to a Bayesian prior, repulsion from the prior due to efficient coding, and central tendency effects on a bounded range. We present a unifying Bayesian theory of biases in perceptual estimation derived from first principles. We demonstrate theoretically an additive decomposition of perceptual biases into attraction to a prior, repulsion away from regions of high encoding precision, and regression away from the boundary. The results reveal a simple and universal rule for predicting the direction of perceptual biases. Our theory accounts for, and yields new insights into, biases in the perception of a variety of stimulus attributes, including orientation, color, and magnitude. These results provide important constraints on the neural implementations of Bayesian computations.
Affiliation(s)
- Xue-Xin Wei, Department of Neuroscience, Department of Psychology, Center for Perceptual Systems, Center for Learning and Memory, Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, TX, USA

33
Maruya A, Zaidi Q. Perceptual transitions between object rigidity and non-rigidity: Competition and cooperation among motion energy, feature tracking, and shape-based priors. J Vis 2024; 24:3. [PMID: 38306112 PMCID: PMC10848565 DOI: 10.1167/jov.24.2.3]
Abstract
Why do moving objects appear rigid when projected retinal images are deformed non-rigidly? We used rotating rigid objects that can appear rigid or non-rigid to test whether shape features contribute to rigidity perception. When two circular rings were rigidly linked at an angle and jointly rotated at moderate speeds, observers reported that the rings wobbled and were not linked rigidly, but rigid rotation was reported at slow speeds. When gaps, paint, or vertices were added, the rings appeared rigidly rotating even at moderate speeds. At high speeds, all configurations appeared non-rigid. Salient features thus contribute to rigidity at slow and moderate speeds but not at high speeds. Simulated responses of arrays of motion-energy cells showed that motion flow vectors are predominantly orthogonal to the contours of the rings, not parallel to the rotation direction. A convolutional neural network trained to distinguish flow patterns for wobbling versus rotation gave a high probability of wobbling for the motion-energy flows. However, the convolutional neural network gave high probabilities of rotation for motion flows generated by tracking features with arrays of MT pattern-motion cells and corner detectors. In addition, circular rings can appear to spin and roll despite the absence of any sensory evidence, and this illusion is prevented by vertices, gaps, and painted segments, showing the effects of rotational symmetry and shape. Combining convolutional neural network outputs that give greater weight to motion energy at fast speeds and to feature tracking at slow speeds, with the shape-based priors for wobbling and rolling, explained rigid and non-rigid percepts across shapes and speeds (R2 = 0.95). The results demonstrate how cooperation and competition between different neuronal classes lead to specific states of visual perception and to transitions between the states.
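The speed-dependent arbitration described in this abstract can be caricatured as a sliding mixture of the two mechanisms' outputs. The probabilities and half-saturation speed below are invented placeholders, not the paper's CNN outputs:

```python
def p_rotation(speed, p_feature=0.9, p_energy=0.1, half_speed=1.0):
    """Blend two motion mechanisms' evidence for rigid rotation.

    Feature tracking (reliable at slow speeds) reports rotation with
    probability p_feature; motion energy (dominant at fast speeds)
    favors wobbling, i.e. a low rotation probability p_energy. The
    mixing weight shifts toward motion energy as speed grows. All
    numbers are illustrative assumptions.
    """
    w_energy = speed / (speed + half_speed)  # 0 at speed 0, -> 1 at high speed
    return (1.0 - w_energy) * p_feature + w_energy * p_energy

# Slow rotation looks rigid; the same object appears to wobble when fast.
assert p_rotation(0.1) > 0.5 > p_rotation(10.0)
```

The paper additionally weighs in shape-based priors (for wobbling and rolling), which this two-term sketch omits.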
Affiliation(s)
- Akihito Maruya, Graduate Center for Vision Research, State University of New York, New York, NY, USA
- Qasim Zaidi, Graduate Center for Vision Research, State University of New York, New York, NY, USA

34
Manavalan M, Song X, Nolte T, Fonagy P, Montague PR, Vilares I. Bayesian Decision-Making Under Uncertainty in Borderline Personality Disorder. J Pers Disord 2024; 38:53-74. [PMID: 38324252 DOI: 10.1521/pedi.2024.38.1.53]
Abstract
Bayesian decision theory suggests that optimal decision-making should use and weigh prior beliefs together with current information, according to their relative uncertainties. However, some characteristics of borderline personality disorder (BPD) patients, such as fast, drastic changes in the overall perception of themselves and others, suggest they may be under-relying on priors. Here, we investigated whether BPD patients have a general deficit in relying on priors or in combining them with current information. We analyzed this by having BPD patients (n = 23) and healthy controls (n = 18) perform a coin-catching sensorimotor task with varying levels of prior and current information uncertainty. Our results indicate that BPD patients learned and used prior information and combined it with current information in a qualitatively Bayesian-like way. Our results show that, at least in a lower-level, nonsocial sensorimotor task, BPD patients can appropriately use both prior and current information, illustrating that potential deficits in using priors may not be widespread or domain-general.
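A minimal sketch of the kind of computation a coin-catching task probes, assuming Gaussian statistics: learn a prior from past outcomes, then combine it with the current cue by precision weighting. All numbers are illustrative, not the study's task parameters:

```python
import statistics

def learn_prior(samples):
    """Estimate a Gaussian prior over coin position from past observations."""
    return statistics.fmean(samples), statistics.stdev(samples)

def combine(prior_mean, prior_sd, cue, cue_sd):
    """Precision-weighted average of the prior and the current (noisy) cue."""
    w_cue = prior_sd**2 / (prior_sd**2 + cue_sd**2)
    return w_cue * cue + (1.0 - w_cue) * prior_mean

mu, sd = learn_prior([4.8, 5.2, 5.0, 4.9, 5.1])   # coins mostly fall near 5
est_reliable = combine(mu, sd, cue=7.0, cue_sd=0.1)  # precise cue: lean on the data
est_vague = combine(mu, sd, cue=7.0, cue_sd=3.0)     # vague cue: fall back on the prior

assert abs(est_reliable - 7.0) < abs(est_vague - 7.0)
assert abs(est_vague - mu) < abs(est_reliable - mu)
```

The Bayesian-like behavior reported above corresponds to shifting the weight `w_cue` appropriately as either uncertainty changes.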
Affiliation(s)
- Mathi Manavalan, Department of Psychology, University of Minnesota, Minneapolis, Minnesota
- Xin Song, Department of Psychology, University of Minnesota, Minneapolis, Minnesota
- Tobias Nolte, Wellcome Centre for Human Neuroimaging, University College London, London, U.K.; Anna Freud National Centre for Children and Families, London, U.K.
- Peter Fonagy, Wellcome Centre for Human Neuroimaging, University College London, London, U.K.; Anna Freud National Centre for Children and Families, London, U.K.
- P Read Montague, Wellcome Centre for Human Neuroimaging, University College London, London, U.K.; Fralin Biomedical Research Institute at VTC, Virginia Polytechnic Institute and State University, Roanoke, Virginia; Department of Physics, Virginia Polytechnic Institute and State University, Blacksburg, Virginia
- Iris Vilares, Department of Psychology, University of Minnesota, Minneapolis, Minnesota

35
|
Luo R, Mai X, Meng J. Effect of motion state variability on error-related potentials during continuous feedback paradigms and their consequences for classification. J Neurosci Methods 2024; 401:109982. [PMID: 37839711 DOI: 10.1016/j.jneumeth.2023.109982]
Abstract
BACKGROUND: An erroneous motion elicits the error-related potential (ErrP) when humans monitor the behavior of external devices. This EEG modality has been widely applied to brain-computer interfaces in an active or passive manner with discrete visual feedback. However, the effect of a variable motion state on ErrP morphology and classification performance raises concerns when the interaction is conducted with continuous visual feedback.
NEW METHOD: In the present study, we designed a cursor-control experiment. Participants monitored a continuously moving cursor as it attempted to reach a target on one side of the screen. The motion state varied multiple times along two factors: (1) motion direction and (2) motion speed. The effects of these two factors on the morphological characteristics and classification performance of the ErrP were analyzed. Furthermore, an offline simulation was performed to evaluate the effectiveness of the proposed extended ErrP-decoder in resolving the interference caused by motion direction changes.
RESULTS: The statistical analyses revealed that motion direction and motion speed significantly influenced the amplitude of the feedback-ERN and frontal-Pe components, while only motion direction significantly affected classification performance.
COMPARISON WITH EXISTING METHODS: A significant deviation was found in ErrP detection utilizing classical correct-versus-erroneous event training; however, this bias can be alleviated by 16% with the extended ErrP-decoder.
CONCLUSION: The morphology and classification performance of the ErrP signal can be affected by motion state variability during continuous feedback paradigms. The results enhance the comprehension of ErrP morphological components and shed light on the detection of a BCI's erroneous behavior in practical continuous control.
Affiliation(s)
- Ruijie Luo, Department of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ximing Mai, Department of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jianjun Meng, Department of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China; State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai, China

36
|
Casartelli L, Maronati C, Cavallo A. From neural noise to co-adaptability: Rethinking the multifaceted architecture of motor variability. Phys Life Rev 2023; 47:245-263. [PMID: 37976727 DOI: 10.1016/j.plrev.2023.10.036]
Abstract
In the last decade, the source and the functional meaning of motor variability have attracted considerable attention in behavioral and brain sciences. This construct classically combined different levels of description, variable internal robustness or coherence, and multifaceted operational meanings. We provide here a comprehensive review of the literature with the primary aim of building a precise lexicon that goes beyond the generic and monolithic use of motor variability. In the pars destruens of the work, we model three domains of motor variability related to peculiar computational elements that influence fluctuations in motor outputs. Each domain is in turn characterized by multiple sub-domains. We begin with the domains of noise and differentiation. However, the main contribution of our model concerns the domain of adaptability, which refers to variation within the same exact motor representation. In particular, we use the terms learning and (social)fitting to specify the portions of motor variability that depend on our propensity to learn and on our largely constitutive propensity to be influenced by external factors. A particular focus is on motor variability in the context of the sub-domain named co-adaptability. Further groundbreaking challenges arise in the modeling of motor variability. Therefore, in a separate pars construens, we attempt to characterize these challenges, addressing both theoretical and experimental aspects as well as potential clinical implications for neurorehabilitation. All in all, our work suggests that motor variability is neither simply detrimental nor beneficial, and that studying its fluctuations can provide meaningful insights for future research.
Affiliation(s)
- Luca Casartelli, Theoretical and Cognitive Neuroscience Unit, Scientific Institute IRCCS E. MEDEA, Italy
- Camilla Maronati, Move'n'Brains Lab, Department of Psychology, Università degli Studi di Torino, Italy
- Andrea Cavallo, Move'n'Brains Lab, Department of Psychology, Università degli Studi di Torino, Italy; C'MoN Unit, Fondazione Istituto Italiano di Tecnologia, Genova, Italy

37
Sun Q, Gong XM, Zhan LZ, Wang SY, Dong LL. Serial dependence bias can predict the overall estimation error in visual perception. J Vis 2023; 23:2. [PMID: 37917052 PMCID: PMC10627302 DOI: 10.1167/jov.23.13.2]
Abstract
Although visual feature estimations are accurate and precise, overall estimation errors (i.e., the differences between estimates and actual values) tend to show systematic patterns. For example, estimates of orientations are systematically biased away from horizontal and vertical orientations, producing an oblique illusion. Additionally, many recent studies have demonstrated that estimates of current visual features are systematically biased toward previously seen features, a phenomenon known as serial dependence. However, no study has examined whether the overall estimation errors are correlated with the serial dependence bias. To address this question, we enrolled three groups of participants to estimate orientation, motion speed, and point-light-walker direction. The results showed that the serial dependence bias explained over 20% of overall estimation errors in the three tasks, indicating that the serial dependence bias can be used to predict the overall estimation errors. The current study is the first to demonstrate that the serial dependence bias is not independent of the overall estimation errors. This finding could inspire researchers to investigate the neural bases underlying visual feature estimation and serial dependence.
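The logic of predicting overall estimation error from serial dependence can be illustrated with a simulated observer whose estimates are attracted to the previous stimulus. The attraction strength and noise level below are arbitrary choices, and the circular wrap-around of orientation is ignored for simplicity:

```python
import numpy as np

rng = np.random.default_rng(0)
stimuli = rng.uniform(0.0, 180.0, size=500)   # e.g. orientations in degrees
previous = np.roll(stimuli, 1)                # stimulus from the preceding trial

# Simulated observer: estimates are attracted toward the previous stimulus
# (strength 0.3) and perturbed by sensory noise. Both numbers are invented.
serial_bias = 0.3 * (previous - stimuli)
estimates = stimuli + serial_bias + rng.normal(0.0, 5.0, size=stimuli.size)
overall_error = estimates - stimuli

# Fraction of overall-error variance explained by the serial-dependence term.
r = np.corrcoef(serial_bias, overall_error)[0, 1]
assert r**2 > 0.2   # for this toy observer, well above the ~20% reported in the study
```

In the actual study the bias is estimated from behavior rather than built in, but the same correlation-based logic links the serial dependence term to the overall error.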
Affiliation(s)
- Qi Sun, School of Psychology, Zhejiang Normal University, Jinhua, PR China; Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, PR China
- Xiu-Mei Gong, School of Psychology, Zhejiang Normal University, Jinhua, PR China
- Lin-Zhe Zhan, School of Psychology, Zhejiang Normal University, Jinhua, PR China
- Si-Yu Wang, School of Psychology, Zhejiang Normal University, Jinhua, PR China

38
Lee ARI, Wilcox LM, Allison RS. Perceiving depth and motion in depth from successive occlusion. J Vis 2023; 23:2. [PMID: 37796523 PMCID: PMC10561775 DOI: 10.1167/jov.23.12.2]
Abstract
Occlusion, or interposition, is one of the strongest and best-known pictorial cues to depth. Furthermore, the successive occlusion of previous objects by newly presented objects produces an impression of increasing depth. Although the perceived motion associated with this illusion has been studied, the depth percept has not. To investigate, participants were presented with two piles of disks; one was always static, and the other was either a static pile or a stacking pile to which a new disk was added every 200 ms. We found that static piles with equal numbers of disks appeared equal in height. In contrast, the successive presentation of disks in the stacking condition appeared to enhance the perceived height of the stack: fewer disks were needed to match the static pile. Surprisingly, participants were also more precise when comparing stacking versus static piles of disks. Reversing the stacking by removing rather than adding disks reversed the bias and degraded precision. In follow-up experiments, we used nonoverlapping static and dynamic configurations to show that the effects are not due to simple differences in perceived numerosity. In sum, our results show that successive occlusions generate a greater sense of height than occlusion alone, and we posit that dynamic occlusion may be an underappreciated source of depth information.
Affiliation(s)
- Abigail R I Lee, Centre for Vision Research, York University, Toronto, Ontario, Canada
- Laurie M Wilcox, Centre for Vision Research, York University, Toronto, Ontario, Canada
- Robert S Allison, Centre for Vision Research, York University, Toronto, Ontario, Canada

39
Quillien T, Tooby J, Cosmides L. Rational inferences about social valuation. Cognition 2023; 239:105566. [PMID: 37499313 DOI: 10.1016/j.cognition.2023.105566]
Abstract
The decisions made by other people can contain information about the value they assign to our welfare, for example, how much they are willing to sacrifice to make us better off. An emerging body of research suggests that we extract and use this information, responding more favorably to those who sacrifice more even if they provide us with less. The magnitude of their trade-offs governs our social responses to them, including partner choice, giving, and anger. This implies that people have well-designed cognitive mechanisms for estimating the weight someone else assigns to their welfare, even when the amounts at stake vary and the information is noisy or sparse. We tested this hypothesis in two studies (N = 200; US samples) by asking participants to observe a partner make two trade-offs and then predict the partner's decisions in other trials. Their predictions were compared with those of a model that uses statistically optimal procedures, operationalized as a Bayesian ideal observer. As predicted, (i) the estimates people made from sparse evidence matched those of the ideal observer, and (ii) lower welfare trade-offs elicited more anger from participants, even when their total payoffs were held constant. These results support the view that people efficiently update their representations of how much others value them. They also provide the most direct test to date of a key assumption of the recalibrational theory of anger: that anger is triggered by cues of low valuation, not by the infliction of costs.
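A Bayesian ideal observer of the kind used as a benchmark above can be sketched as a grid posterior over the welfare weight w, with choices modeled by a softmax over utilities u = own payoff + w × our payoff. The utility form, softmax temperature, and grid are assumptions for illustration, not the paper's exact model:

```python
import numpy as np

def posterior_over_weight(choices, beta=2.0, grid=None):
    """Grid posterior over the welfare weight w a partner places on us.

    Each observation is ((self_a, other_a), (self_b, other_b), chose_a).
    Choices are modeled as a softmax over utility u = self + w * other,
    starting from a flat prior over w.
    """
    if grid is None:
        grid = np.linspace(0.0, 2.0, 201)
    log_post = np.zeros_like(grid)                 # flat prior
    for (sa, oa), (sb, ob), chose_a in choices:
        u_a = sa + grid * oa
        u_b = sb + grid * ob
        p_a = 1.0 / (1.0 + np.exp(-beta * (u_a - u_b)))
        log_post += np.log(p_a if chose_a else 1.0 - p_a)
    post = np.exp(log_post - log_post.max())       # stabilize before normalizing
    return grid, post / post.sum()

# Partner twice sacrificed 5 units to give us 10: this implies w > 0.5.
choices = [((0, 10), (5, 0), True), ((0, 10), (5, 0), True)]
grid, post = posterior_over_weight(choices)
assert grid[np.argmax(post)] > 0.5
```

Even two observations concentrate the posterior on weights that make the observed sacrifices worthwhile, matching the abstract's point that sparse evidence can support accurate estimates.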
Affiliation(s)
- Tadeg Quillien
- Center for Evolutionary Psychology, University of California, Santa Barbara, United States of America; Department of Psychological & Brain Sciences, University of California, Santa Barbara, United States of America
- John Tooby
- Center for Evolutionary Psychology, University of California, Santa Barbara, United States of America; Department of Anthropology, University of California, Santa Barbara, United States of America
- Leda Cosmides
- Center for Evolutionary Psychology, University of California, Santa Barbara, United States of America; Department of Psychological & Brain Sciences, University of California, Santa Barbara, United States of America

40
Noel JP, Bill J, Ding H, Vastola J, DeAngelis GC, Angelaki DE, Drugowitsch J. Causal inference during closed-loop navigation: parsing of self- and object-motion. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220344. [PMID: 37545300] [PMCID: PMC10404925] [DOI: 10.1098/rstb.2022.0344]
Abstract
A key computation in building adaptive internal models of the external world is to ascribe sensory signals to their likely cause(s), a process of causal inference (CI). CI is well studied within the framework of two-alternative forced-choice tasks, but less well understood within the context of naturalistic action-perception loops. Here, we examine the process of disambiguating retinal motion caused by self- and/or object-motion during closed-loop navigation. First, we derive a normative account specifying how observers ought to intercept hidden and moving targets given their belief about (i) whether retinal motion was caused by the target moving, and (ii) if so, with what velocity. Next, in line with the modelling results, we show that humans report targets as stationary and steer towards their initial rather than final position more often when they are themselves moving, suggesting a putative misattribution of object-motion to the self. Further, we predict that observers should misattribute retinal motion more often: (i) during passive rather than active self-motion (given the lack of an efference copy informing self-motion estimates in the former), and (ii) when targets are presented eccentrically rather than centrally (given that lateral self-motion flow vectors are larger at eccentric locations during forward self-motion). Results support both of these predictions. Lastly, analyses of eye movements show that, while initial saccades toward targets were largely accurate regardless of the self-motion condition, subsequent gaze pursuit was modulated by target velocity during object-only motion, but not during concurrent object- and self-motion. These results demonstrate CI within action-perception loops, and suggest a protracted temporal unfolding of the computations characterizing CI. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
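A minimal sketch of the core causal-inference computation the abstract describes, deciding whether retinal motion reflects real object motion or self-motion alone. This is not the authors' normative model; the Gaussian forms and all parameter values are assumptions:

```python
import math

def p_object_moving(retinal_v, self_flow_v, sigma_obs=0.5,
                    sigma_v=2.0, prior_moving=0.5):
    """Posterior probability that retinal motion reflects object motion (C=1)
    rather than self-motion alone (C=0).

    C=0: retinal velocity ~ N(self_flow_v, sigma_obs^2)
    C=1: object velocity ~ N(0, sigma_v^2), so retinal velocity
         ~ N(self_flow_v, sigma_obs^2 + sigma_v^2) after marginalization.
    """
    def gauss(x, mu, var):
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    like_still = gauss(retinal_v, self_flow_v, sigma_obs ** 2)
    like_move = gauss(retinal_v, self_flow_v, sigma_obs ** 2 + sigma_v ** 2)
    num = prior_moving * like_move
    return num / (num + (1 - prior_moving) * like_still)
```

Retinal motion close to the flow predicted from self-motion is attributed to the self (the target is reported stationary); large residual motion is attributed to the object.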
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, NY 10003, USA
- Johannes Bill
- Department of Neurobiology, Harvard University, Boston, MA 02115, USA
- Department of Psychology, Harvard University, Boston, MA 02115, USA
- Haoran Ding
- Center for Neural Science, New York University, New York, NY 10003, USA
- John Vastola
- Department of Neurobiology, Harvard University, Boston, MA 02115, USA
- Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14611, USA
- Dora E. Angelaki
- Center for Neural Science, New York University, New York, NY 10003, USA
- Tandon School of Engineering, New York University, New York, NY 10003, USA
- Jan Drugowitsch
- Department of Neurobiology, Harvard University, Boston, MA 02115, USA
- Center for Brain Science, Harvard University, Boston, MA 02115, USA

41
Abstract
Some visual properties are consistent across a wide range of environments, while other properties are more labile. The efficient coding hypothesis states that many of these regularities in the environment can be discarded from neural representations, thus allocating more of the brain's dynamic range to properties that are likely to vary. This paradigm is less clear about how the visual system prioritizes different pieces of information that vary across visual environments. One solution is to prioritize information that can be used to predict future events, particularly those that guide behavior. The relationship between the efficient coding and future prediction paradigms is an area of active investigation. In this review, we argue that these paradigms are complementary and often act on distinct components of the visual input. We also discuss how normative approaches to efficient coding and future prediction can be integrated.
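A toy example of the efficient-coding idea that predictable (correlated) structure can be removed from a neural representation before transmission. The specific whitening construction below is an illustration, not anything from this review:

```python
import numpy as np

def whitening_filter(cov):
    """Efficient-coding-style decorrelation: a linear filter W such that
    W @ cov @ W.T = I, removing redundant correlational structure so the
    full dynamic range is spent on what actually varies."""
    eigval, eigvec = np.linalg.eigh(cov)               # cov must be symmetric PD
    return eigvec @ np.diag(1.0 / np.sqrt(eigval)) @ eigvec.T

cov = np.array([[2.0, 1.2],                            # two correlated input channels
                [1.2, 1.5]])
W = whitening_filter(cov)
out_cov = W @ cov @ W.T                                # output covariance ~ identity
```

In the review's framing, prediction-based coding would instead prioritize the components of the input useful for forecasting future, behaviorally relevant events; the two objectives can act on distinct components of the same input.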
Affiliation(s)
- Michael B Manookin
- Department of Ophthalmology, University of Washington, Seattle, Washington, USA
- Vision Science Center, University of Washington, Seattle, Washington, USA
- Karalis Johnson Retina Center, University of Washington, Seattle, Washington, USA
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, Washington, USA
- Vision Science Center, University of Washington, Seattle, Washington, USA

42
Fulvio JM, Rokers B, Samaha J. Task feedback suggests a post-perceptual component to serial dependence. J Vis 2023; 23:6. [PMID: 37682557] [PMCID: PMC10500366] [DOI: 10.1167/jov.23.10.6]
Abstract
Decisions across a range of perceptual tasks are biased toward past stimuli. Such serial dependence is thought to be an adaptive low-level mechanism that promotes perceptual stability across time. However, recent studies suggest post-perceptual mechanisms may also contribute to serially biased responses, calling into question a single locus of serial dependence and the nature of integration of past and present sensory inputs. We measured serial dependence in the context of a three-dimensional (3D) motion perception task where uncertainty in the sensory information varied substantially from trial to trial. We found that serial dependence varied with stimulus properties that impact sensory uncertainty on the current trial. Reduced stimulus contrast was associated with an increased bias toward the stimulus direction of the previous trial. Critically, performance feedback, which reduced sensory uncertainty, abolished serial dependence. These results provide clear evidence for a post-perceptual locus of serial dependence in 3D motion perception and support the role of serial dependence as a response strategy in the face of substantial sensory uncertainty.
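One simple way to express the reported pattern, a bias toward the previous stimulus that grows with current-trial sensory uncertainty and vanishes when feedback removes that uncertainty, is a hypothetical uncertainty-weighted average (the weighting rule and the constant k are invented, not the authors' analysis):

```python
def serially_biased_estimate(current_obs, prev_stim, uncertainty, k=0.5):
    """Response biased toward the previous trial's stimulus.

    The bias weight w grows with current sensory uncertainty (e.g. low
    contrast) and is zero when uncertainty is zero (e.g. after feedback),
    reproducing the qualitative pattern in the abstract.
    """
    w = k * uncertainty / (1.0 + k * uncertainty)   # bias weight in [0, 1)
    return (1 - w) * current_obs + w * prev_stim
```

With feedback (uncertainty effectively zero) the estimate equals the current observation, i.e. serial dependence is abolished; at low contrast (high uncertainty) the estimate is pulled toward the previous direction.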
Affiliation(s)
- Bas Rokers
- Department of Psychology, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Department of Psychology and Center for Neural Science, New York University, New York, NY, USA
- Jason Samaha
- Department of Psychology, University of California, Santa Cruz, Santa Cruz, CA, USA

43
Yu K, Tuerlinckx F, Vanpaemel W, Zaman J. Humans display interindividual differences in the latent mechanisms underlying fear generalization behaviour. Commun Psychol 2023; 1:5. [PMID: 39242719] [PMCID: PMC11290606] [DOI: 10.1038/s44271-023-00005-0]
Abstract
Human generalization research aims to understand the processes underlying the transfer of prior experiences to new contexts. Generalization research predominantly relies on descriptive statistics, assumes a single generalization mechanism, interprets generalization from mono-source data, and disregards individual differences. Unfortunately, such an approach fails to disentangle various mechanisms underlying generalization behaviour and can readily result in biased conclusions regarding generalization tendencies. Therefore, we combined a computational model with multi-source data to mechanistically investigate human generalization behaviour. By simultaneously modelling learning, perceptual and generalization data at the individual level, we revealed meaningful variations in how different mechanisms contribute to generalization behaviour. The current research suggests the need for revising the theoretical and analytic foundations in the field to shift the attention away from forecasting group-level generalization behaviour and toward understanding how such phenomena emerge at the individual level. This raises the question for future research whether a mechanism-specific differential diagnosis may be beneficial for generalization-related psychiatric disorders.
Affiliation(s)
- Jonas Zaman
- KU Leuven, Leuven, Belgium
- University of Hasselt, Hasselt, Belgium

44
Charlton JA, Młynarski WF, Bai YH, Hermundstad AM, Goris RLT. Environmental dynamics shape perceptual decision bias. PLoS Comput Biol 2023; 19:e1011104. [PMID: 37289753] [DOI: 10.1371/journal.pcbi.1011104]
Abstract
To interpret the sensory environment, the brain combines ambiguous sensory measurements with knowledge that reflects context-specific prior experience. But environmental contexts can change abruptly and unpredictably, resulting in uncertainty about the current context. Here we address two questions: how should context-specific prior knowledge optimally guide the interpretation of sensory stimuli in changing environments, and do human decision-making strategies resemble this optimum? We probe these questions with a task in which subjects report the orientation of ambiguous visual stimuli that were drawn from three dynamically switching distributions, representing different environmental contexts. We derive predictions for an ideal Bayesian observer that leverages knowledge about the statistical structure of the task to maximize decision accuracy, including knowledge about the dynamics of the environment. We show that its decisions are biased by the dynamically changing task context. The magnitude of this decision bias depends on the observer's continually evolving belief about the current context. The model therefore not only predicts that decision bias will grow as the context is indicated more reliably, but also as the stability of the environment increases, and as the number of trials since the last context switch grows. Analysis of human choice data validates all three predictions, suggesting that the brain leverages knowledge of the statistical structure of environmental change when interpreting ambiguous sensory signals.
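A sketch of the kind of switching-context belief update such an ideal observer performs: three Gaussian orientation contexts, as in the task, with hypothetical transition and likelihood parameters (this is an illustration, not the authors' fitted model):

```python
import numpy as np

def update_context_belief(belief, obs, context_means, sigma=10.0, p_stay=0.9):
    """One trial of Bayesian belief updating over switching contexts.

    The environment stays in its current context with probability p_stay,
    otherwise switches uniformly to another; each context is a Gaussian
    distribution over stimulus orientation with spread sigma.
    """
    k = len(context_means)
    transition = np.full((k, k), (1 - p_stay) / (k - 1))
    np.fill_diagonal(transition, p_stay)
    predicted = transition.T @ belief                     # prior for this trial
    like = np.exp(-(obs - np.asarray(context_means)) ** 2 / (2 * sigma ** 2))
    post = predicted * like
    return post / post.sum()

belief = np.ones(3) / 3                                   # start uncertain
for theta in [-40.0, -42.0, -38.0]:                       # repeated context-0 evidence
    belief = update_context_belief(belief, theta, [-45.0, 0.0, 45.0])
```

The belief about the current context sharpens with consecutive consistent trials; in the paper's account this evolving belief sets the magnitude of the decision bias, which therefore grows with environmental stability and with trials since the last switch.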
Affiliation(s)
- Julie A Charlton
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
- Yoon H Bai
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
- Ann M Hermundstad
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America
- Robbe L T Goris
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America

45
Muller KS, Matthis J, Bonnen K, Cormack LK, Huk AC, Hayhoe M. Retinal motion statistics during natural locomotion. eLife 2023; 12:e82410. [PMID: 37133442] [PMCID: PMC10156169] [DOI: 10.7554/elife.82410]
Abstract
Walking through an environment generates retinal motion, which humans rely on to perform a variety of visual tasks. Retinal motion patterns are determined by an interconnected set of factors, including gaze location, gaze stabilization, the structure of the environment, and the walker's goals. The characteristics of these motion signals have important consequences for neural organization and behavior. However, to date, there are no empirical in situ measurements of how combined eye and body movements interact with real 3D environments to shape the statistics of retinal motion signals. Here, we collect measurements of the eyes, the body, and the 3D environment during locomotion. We describe properties of the resulting retinal motion patterns. We explain how these patterns are shaped by gaze location in the world, as well as by behavior, and how they may provide a template for the way motion sensitivity and receptive field properties vary across the visual field.
Affiliation(s)
- Karl S Muller
- Center for Perceptual Systems, The University of Texas at Austin, Austin, United States
- Jonathan Matthis
- Department of Biology, Northeastern University, Boston, United States
- Kathryn Bonnen
- School of Optometry, Indiana University, Bloomington, United States
- Lawrence K Cormack
- Center for Perceptual Systems, The University of Texas at Austin, Austin, United States
- Alex C Huk
- Center for Perceptual Systems, The University of Texas at Austin, Austin, United States
- Mary Hayhoe
- Center for Perceptual Systems, The University of Texas at Austin, Austin, United States

46
Menceloglu M, Song JH. Motion duration is overestimated behind an occluder in action and perception tasks. J Vis 2023; 23:11. [PMID: 37171804] [PMCID: PMC10184779] [DOI: 10.1167/jov.23.5.11]
Abstract
Motion estimation behind an occluder is a common task in situations like crossing the street or passing another car. For subsecond motion durations, people tend to overestimate the duration of an object's motion once it becomes occluded. Here, we explored (a) whether this bias depended on the type of interceptive action: discrete keypress versus continuous reach, and (b) whether it was present in a perception task without an interceptive action. We used a prediction-motion task and presented a bar moving across the screen with a constant velocity that later became occluded. In the action task, participants stopped the occluded bar when they thought the bar reached the goal position via keypress or reach. They were more likely to stop the bar after it passed the goal position regardless of the action type, suggesting that the duration of occluded motion was overestimated (or its speed was underestimated). In the perception task, where participants judged whether a tone was presented before or after the bar reached the goal position, a similar bias was observed. In both tasks, the bias was near constant across motion durations and directions and grew over trials. We speculate that this robust bias may be due to a temporal illusion, a Bayesian slow-motion prior, or the processing of the visible-occluded boundary crossing. Understanding its exact mechanism, the conditions on which it depends, and the relative roles of speed and time perception requires further research.
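The "Bayesian slow-motion prior" speculation can be illustrated with a standard Gaussian shrinkage estimate: a speed estimate shrunk toward zero directly yields an overestimated occluded duration. The values below are arbitrary:

```python
def bayesian_speed_estimate(v_measured, sigma_meas=1.0, sigma_prior=2.0):
    """Posterior-mean speed under a zero-mean Gaussian slow-motion prior.

    Combining a Gaussian likelihood N(v_measured, sigma_meas^2) with a prior
    N(0, sigma_prior^2) gives a posterior mean shrunk toward zero.
    """
    gain = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_meas ** 2)
    return gain * v_measured

true_speed = 4.0     # deg/s, constant pre-occlusion velocity
distance = 8.0       # deg of occluded travel to the goal position
v_hat = bayesian_speed_estimate(true_speed)
t_true = distance / true_speed   # actual occluded duration
t_est = distance / v_hat         # predicted duration: longer than t_true
```

Because the estimated speed is below the true speed, the predicted time for the bar to reach the goal exceeds the actual time, matching the late-stop bias in both the action and perception tasks.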
Affiliation(s)
- Melisa Menceloglu
- Department of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, RI, USA
- Joo-Hyun Song
- Department of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, RI, USA
- Carney Institute for Brain Science, Brown University, Providence, RI, USA

47
Khayat N, Ahissar M, Hochstein S. Perceptual history biases in serial ensemble representation. J Vis 2023; 23:7. [PMID: 36920389] [PMCID: PMC10029768] [DOI: 10.1167/jov.23.3.7]
Abstract
Ensemble perception refers to the visual system's ability to efficiently represent groups of similar objects as a unified percept using their summary statistical information. Most studies focused on extraction of current trial averages, giving little attention to prior experience effects, although a few recent studies found that ensemble mean estimations contract toward previously presented stimuli, with most of these focusing on explicit perceptual averaging of simultaneously presented item ensembles. Yet, the time element is crucial in real dynamic environments, where we encounter ensemble items over time, aggregating information until reaching summary representations. Moreover, statistical information of objects and scenes is learned over time, often implicitly, and then used for predictions that shape perception, promoting environmental stability. Therefore, we now focus on temporal aspects of ensemble statistics and test whether prior information, beyond the current trial, biases implicit perceptual decisions. We designed methods to separate current trial biases from those of previously seen trial ensembles. In each trial, six circles of different sizes were presented serially, followed by two test items. Participants were asked to choose which was present in the sequence. Participants unconsciously rely on ensemble statistics, choosing stimuli closer to the ensemble mean. To isolate the influence of earlier trials, the two test items were sometimes equidistant from the current trial mean. Results showed that membership judgments were biased toward the current trial mean when it was informative (the largest effect). On equidistant trials, judgments were biased toward previously experienced stimulus statistics. Comparison of similar conditions with a shifted stimulus distribution ruled out a bias toward an earlier, pre-session, prototypical diameter. We conclude that ensemble perception, even for temporally experienced ensembles, is influenced not only by the current trial mean but also by the means of recently seen ensembles, and that these influences are somewhat correlated on a participant-by-participant basis.
Affiliation(s)
- Noam Khayat
- ELSC Edmond & Lily Safra Center for Brain Research & Life Sciences Institute, Hebrew University, Jerusalem, Israel
- Merav Ahissar
- ELSC Edmond & Lily Safra Center for Brain Research & Psychology Department, Hebrew University, Jerusalem, Israel
- Shaul Hochstein
- ELSC Edmond & Lily Safra Center for Brain Research & Life Sciences Institute, Hebrew University, Jerusalem, Israel

48
Korai Y, Miura K. A dynamical model of visual motion processing for arbitrary stimuli including type II plaids. Neural Netw 2023; 162:46-68. [PMID: 36878170] [DOI: 10.1016/j.neunet.2023.02.039]
Abstract
To explore the operating principles of the visual motion processing that underlies perception and eye movements, we model, at the algorithmic level, how the brain estimates the velocity of a visual stimulus, using a dynamical systems approach. In this study, we formulate the model as an optimization process of an appropriately defined objective function. The model is applicable to arbitrary visual stimuli. We find that our theoretical predictions qualitatively agree with the time evolution of eye movements reported in previous work across various types of stimulus. Our results suggest that the brain implements the present framework as its internal model of motion vision. We anticipate our model to be a promising building block for a deeper understanding of visual motion processing as well as for the development of robotics.
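An illustrative gradient-flow version of such an objective-optimization model (not the paper's actual objective): each grating component of a plaid constrains velocity only along its normal (the aperture problem), and a small slowness term regularizes the estimate; the fixed point approximates the intersection-of-constraints (IOC) solution. The component normals, speeds, and all constants are invented for the sketch:

```python
import numpy as np

def velocity_dynamics(normals, speeds, lam=0.01, lr=0.1, steps=500):
    """Gradient-flow estimate of 2D velocity for a multi-component stimulus.

    Minimizes sum_i (n_i . v - s_i)^2 + lam * |v|^2 by gradient descent,
    modeling the time evolution of the velocity estimate as a dynamical
    system relaxing toward the IOC solution.
    """
    n = np.asarray(normals, dtype=float)
    s = np.asarray(speeds, dtype=float)
    v = np.zeros(2)                          # start at rest
    for _ in range(steps):
        grad = 2 * n.T @ (n @ v - s) + 2 * lam * v
        v -= lr * grad
    return v

# A simple two-component plaid; the IOC solution here is (1, 0)
v_est = velocity_dynamics(normals=[[1.0, 0.2], [1.0, -0.2]],
                          speeds=[1.0, 1.0])
```

Early in the relaxation the estimate is dominated by the individual component constraints, drifting toward the combined IOC solution over time, which is the qualitative signature such models compare against eye-movement trajectories.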
Affiliation(s)
- Yusuke Korai
- Integrated Clinical Education Center, Kyoto University Hospital, Kyoto University, Kyoto 606-8507, Japan
- Kenichiro Miura
- Graduate School of Medicine, Kyoto University, Kyoto 606-8501, Japan; Department of Pathology of Mental Diseases, National Institute of Mental Health, National Center of Neurology and Psychiatry, Tokyo 187-8551, Japan

49
Recurrent networks endowed with structural priors explain suboptimal animal behavior. Curr Biol 2023; 33:622-638.e7. [PMID: 36657448] [DOI: 10.1016/j.cub.2022.12.044]
Abstract
The strategies found by animals facing a new task are determined both by individual experience and by structural priors evolved to leverage the statistics of natural environments. Rats quickly learn to capitalize on the trial sequence correlations of two-alternative forced choice (2AFC) tasks after correct trials but consistently deviate from optimal behavior after error trials. To understand this outcome-dependent gating, we first show that recurrent neural networks (RNNs) trained in the same 2AFC task outperform rats as they can readily learn to use across-trial information both after correct and error trials. We hypothesize that, although RNNs can optimize their behavior in the 2AFC task without any a priori restrictions, rats' strategy is constrained by a structural prior adapted to a natural environment in which rewarded and non-rewarded actions provide largely asymmetric information. When pre-training RNNs in a more ecological task with more than two possible choices, networks develop a strategy by which they gate off the across-trial evidence after errors, mimicking rats' behavior. Population analyses show that the pre-trained networks form an accurate representation of the sequence statistics independently of the outcome in the previous trial. After error trials, gating is implemented by a change in the network dynamics that temporarily decouple the categorization of the stimulus from the across-trial accumulated evidence. Our results suggest that the rats' suboptimal behavior reflects the influence of a structural prior that reacts to errors by isolating the network decision dynamics from the context, ultimately constraining the performance in a 2AFC laboratory task.
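The proposed outcome-dependent gating can be caricatured in a few lines: a hypothetical across-trial evidence term that is simply zeroed after error trials (this compresses the network-dynamics account into a toy decision rule; the log-odds form and parameter are invented):

```python
import math

def across_trial_logit(history, p_rep=0.8, gate_after_error=True):
    """Across-trial evidence (in log-odds) for repeating the previous choice
    in a serially correlated 2AFC task.

    history: list of (choice, correct) pairs with choice in {-1, +1}.
    With gate_after_error=True (the rat-like structural prior), the
    across-trial evidence is gated off after error trials; an unconstrained
    RNN would keep using it (gate_after_error=False).
    """
    if not history:
        return 0.0
    last_choice, last_correct = history[-1]
    if gate_after_error and not last_correct:
        return 0.0                            # decouple decision from context
    return last_choice * math.log(p_rep / (1 - p_rep))
```

After a correct rightward choice the evidence favors repeating; after an error the gated agent falls back to the stimulus alone, reproducing the rats' deviation from optimality that the pre-trained networks mimic.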
50
Priming of probabilistic attentional templates. Psychon Bull Rev 2023; 30:22-39. [PMID: 35831678] [DOI: 10.3758/s13423-022-02125-w]
Abstract
Attentional priming has a dominating influence on vision: it speeds visual search, releases items from crowding, reduces masking effects, and, during free choice, biases selection toward primed targets over unprimed ones. Many accounts postulate that templates stored in working memory control what we attend to and mediate the priming. But what is the nature of these templates (or representations)? Analyses of real-world visual scenes suggest that tuning templates to exact color or luminance values would be impractical, since those can vary greatly because of changes in environmental circumstances and perceptual interpretation. Tuning templates to a range of the most probable values would be more efficient. Recent evidence does indeed suggest that the visual system represents such probability, gradually encoding statistical variation in the environment through repeated exposure to input statistics. This is consistent with evidence from neurophysiology and theoretical neuroscience as well as computational evidence of probabilistic representations in visual perception. I argue that such probabilistic representations are the unit of attentional priming and that priming of, say, a repeated single-color value simply involves priming of a distribution with no variance. This "priming of probability" view can be modelled within a Bayesian framework where priming provides contextual priors. Priming can therefore be thought of as learning of the underlying probability density function of the target or distractor sets in a given continuous task.
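A minimal sketch of "priming of probability": a template represented as a Gaussian over a feature axis, leakily updated by each exposure. The update rule and all constants are invented for illustration, not taken from this account:

```python
def update_template(mean, var, sample, alpha=0.3):
    """Leaky update of a probabilistic attentional template, modeled as a
    Gaussian over feature values (e.g. target color).

    Repeated exposure shifts the template mean toward the input statistics
    and lets its variance track the spread of recent samples.
    """
    new_mean = (1 - alpha) * mean + alpha * sample
    new_var = (1 - alpha) * var + alpha * (sample - new_mean) ** 2
    return new_mean, new_var

mean, var = 0.0, 100.0        # broad initial template over a feature axis
for color in [50.0] * 10:     # repeated single-value priming
    mean, var = update_template(mean, var, color)
```

Priming a repeated single value drives the template mean onto that value while the variance shrinks toward zero, i.e. priming of a distribution with no variance; priming a variable distractor or target set would instead settle on its full density.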