1. Cusimano M, Hewitt LB, McDermott JH. Listening with generative models. Cognition 2024; 253:105874. PMID: 39216190; DOI: 10.1016/j.cognition.2024.105874.
Abstract
Perception has long been envisioned to use an internal model of the world to explain the causes of sensory signals. However, such accounts have historically not been testable, typically requiring intractable search through the space of possible explanations. Using auditory scenes as a case study, we leveraged contemporary computational tools to infer explanations of sounds in a candidate internal generative model of the auditory world (ecologically inspired audio synthesizers). Model inferences accounted for many classic illusions. Unlike traditional accounts of auditory illusions, the model is applicable to any sound, and exhibited human-like perceptual organization for real-world sound mixtures. The combination of stimulus-computability and interpretable model structure enabled 'rich falsification', revealing additional assumptions about sound generation needed to account for perception. The results show how generative models can account for the perception of both classic illusions and everyday sensory signals, and illustrate the opportunities and challenges involved in incorporating them into theories of perception.
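
The analysis-by-synthesis idea summarized above — propose candidate scene descriptions, render them with a generative model, and keep the explanation that best accounts for the sensory signal — can be illustrated with a toy example. The sketch below is not the authors' model or synthesizer; it simply scores hand-enumerated hypotheses about how many pure tones are present in a noisy mixture, using a Gaussian likelihood and a prior that penalizes extra sources. All frequencies and parameters are invented for illustration.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
sr, dur = 8000, 0.5                      # sample rate (Hz), duration (s)
t = np.arange(int(sr * dur)) / sr

def render(freqs):
    """Toy 'synthesizer': a scene is just a sum of unit-amplitude pure tones."""
    return sum(np.sin(2 * np.pi * f * t) for f in freqs) if freqs else np.zeros_like(t)

# Observed sound: two tones plus noise (ground truth unknown to the observer).
observed = render([440.0, 660.0]) + rng.normal(0, 0.5, t.size)

# Hypothesis space: any subset of a small set of candidate tone frequencies.
candidates = [330.0, 440.0, 550.0, 660.0]
hypotheses = [c for k in range(len(candidates) + 1)
              for c in combinations(candidates, k)]

def log_posterior(freqs, sigma=0.5, complexity_penalty=50.0):
    resid = observed - render(freqs)
    log_lik = -0.5 * np.sum(resid**2) / sigma**2     # Gaussian likelihood of the waveform
    log_prior = -complexity_penalty * len(freqs)     # prefer simpler scene descriptions
    return log_lik + log_prior

best = max(hypotheses, key=log_posterior)
print("MAP scene description (tone frequencies in Hz):", best)
```
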
Affiliation(s)
- Maddie Cusimano
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, United States of America.
- Luke B Hewitt
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, United States of America
- Josh H McDermott
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, United States of America; McGovern Institute, Massachusetts Institute of Technology, United States of America; Center for Brains Minds and Machines, Massachusetts Institute of Technology, United States of America; Speech and Hearing Bioscience and Technology, Harvard University, United States of America.

2. Negen J. No evidence for a difference in Bayesian reasoning for egocentric versus allocentric spatial cognition. PLoS One 2024; 19:e0312018. PMID: 39388501; PMCID: PMC11466427; DOI: 10.1371/journal.pone.0312018.
Abstract
Bayesian reasoning (i.e. prior integration, cue combination, and loss minimization) has emerged as a prominent model for some kinds of human perception and cognition. The major theoretical issue is that we do not yet have a robust way to predict when we will or will not observe Bayesian effects in human performance. Here we tested a proposed divide in terms of Bayesian reasoning for egocentric spatial cognition versus allocentric spatial cognition (self-centered versus world-centred). The proposal states that people will show stronger Bayesian reasoning effects when it is possible to perform the Bayesian calculations within the egocentric frame, as opposed to requiring an allocentric frame. Three experiments were conducted with one egocentric-allowing condition and one allocentric-requiring condition but otherwise matched as closely as possible. No difference was found in terms of prior integration (Experiment 1), cue combination (Experiment 2), or loss minimization (Experiment 3). The contrast in previous reports, where Bayesian effects are present in many egocentric-allowing tasks while they are absent in many allocentric-requiring tasks, is likely due to other differences between the tasks-for example, the way allocentric-requiring tasks are often more complex and memory intensive.
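
For readers unfamiliar with the three Bayesian components tested here, the sketch below shows the standard Gaussian form of prior integration and cue combination that such experiments are usually benchmarked against. It is a generic textbook illustration with assumed noise values, not the analysis used in the paper.

```python
import numpy as np

def combine_gaussians(means, sds):
    """Precision-weighted combination of independent Gaussian sources."""
    means, sds = np.asarray(means, float), np.asarray(sds, float)
    w = 1.0 / sds**2                       # precision = reliability
    post_mean = np.sum(w * means) / np.sum(w)
    post_sd = np.sqrt(1.0 / np.sum(w))
    return post_mean, post_sd

# Prior integration: a noisy observation at 70 (SD 15) pulled toward a prior at 50 (SD 10).
print(combine_gaussians([50, 70], [10, 15]))   # estimate lands between, nearer the prior

# Cue combination: two cues to the same location, weighted by reliability.
print(combine_gaussians([40, 48], [4, 8]))     # closer to the more reliable cue (40)
```

Loss minimization enters at the response stage: under a squared-error loss the optimal report is the posterior mean computed above, while an asymmetric loss shifts the report away from it.
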
Affiliation(s)
- James Negen
- Psychology Department, Liverpool John Moores University, Liverpool, United Kingdom

3. Zhou L, Liu Y, Jiang Y, Wang W, Xu P, Zhou K. The distinct development of stimulus and response serial dependence. Psychon Bull Rev 2024; 31:2137-2147. PMID: 38379075; PMCID: PMC11543724; DOI: 10.3758/s13423-024-02474-8.
Abstract
Serial dependence (SD) is a phenomenon wherein current perceptions are biased by the previous stimulus and response. This helps to attenuate perceptual noise and variability in sensory input and facilitates stable ongoing perceptions of the environment. However, little is known about the developmental trajectory of SD. This study investigates how the stimulus and response biases of the SD effect develop across three age groups. Conventional analyses, in which previous stimulus and response biases were assessed separately, revealed significant changes in the biases over time. Previous stimulus bias shifted from repulsion to attraction, while previous response bias evolved from attraction to greater attraction. However, there was a strong correlation between stimulus and response orientations. Therefore, a generalized linear mixed-effects (GLME) analysis that simultaneously considered both previous stimulus and response outperformed separate analyses. This revealed that previous stimulus and response resulted in two distinct biases with different developmental trajectories. The repulsion bias from the previous stimulus remained relatively stable across all age groups, whereas the attraction bias toward the previous response was significantly stronger in adults than in children and adolescents. These findings demonstrate that the repulsion bias from preceding stimuli is established early in the developing brain (at least by around 10 years old), while the attraction bias towards responses is not fully developed until adulthood. Our findings provide new insights into the development of the SD phenomenon and how humans integrate two opposing mechanisms into their perceptual responses to external input during development.
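
The key analytical point — previous stimulus and previous response are highly correlated, so their biases should be estimated jointly rather than in separate models — can be sketched with a linear mixed-effects model on simulated data. The sketch uses statsmodels' `mixedlm` as a stand-in; the authors' GLME specification, variables, and data are not reproduced here, and the simulated effect sizes are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for subj in range(20):
    stim = rng.uniform(0, 180, 200)                 # orientation shown on each trial (deg)
    resp = np.empty_like(stim)
    resp[0] = stim[0] + rng.normal(0, 6)
    for t in range(1, stim.size):
        # Built-in ground truth: mild repulsion from the previous stimulus,
        # stronger attraction toward the previous response (wrap-around ignored).
        bias = -0.05 * (stim[t - 1] - stim[t]) + 0.15 * (resp[t - 1] - stim[t])
        resp[t] = stim[t] + bias + rng.normal(0, 6)
        rows.append(dict(subject=subj,
                         error=resp[t] - stim[t],
                         prev_stim=stim[t - 1] - stim[t],
                         prev_resp=resp[t - 1] - stim[t]))
df = pd.DataFrame(rows)

# Both predictors entered simultaneously, with a random intercept per subject.
fit = smf.mixedlm("error ~ prev_stim + prev_resp", df, groups=df["subject"]).fit()
print(fit.params[["prev_stim", "prev_resp"]])   # should recover roughly -0.05 and +0.15
```
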
Affiliation(s)
- Liqin Zhou
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, China
- Yujie Liu
- Sino-Danish College, University of Chinese Academy of Sciences, Beijing, China
- State Key Laboratory of Brain and Cognitive Sciences, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- Yuhan Jiang
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, China
- Wenbo Wang
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, China
- Pengfei Xu
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, China
- Ke Zhou
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, China.

4. Monk T, Dennler N, Ralph N, Rastogi S, Afshar S, Urbizagastegui P, Jarvis R, van Schaik A, Adamatzky A. Electrical Signaling Beyond Neurons. Neural Comput 2024; 36:1939-2029. PMID: 39141803; DOI: 10.1162/neco_a_01696.
Abstract
Neural action potentials (APs) are difficult to interpret as signal encoders and/or computational primitives. Their relationships with stimuli and behaviors are obscured by the staggering complexity of nervous systems themselves. We can reduce this complexity by observing that "simpler" neuron-less organisms also transduce stimuli into transient electrical pulses that affect their behaviors. Without a complicated nervous system, APs are often easier to understand as signal/response mechanisms. We review examples of nonneural stimulus transductions in domains of life largely neglected by theoretical neuroscience: bacteria, protozoans, plants, fungi, and neuron-less animals. We report properties of those electrical signals-for example, amplitudes, durations, ionic bases, refractory periods, and particularly their ecological purposes. We compare those properties with those of neurons to infer the tasks and selection pressures that neurons satisfy. Throughout the tree of life, nonneural stimulus transductions time behavioral responses to environmental changes. Nonneural organisms represent the presence or absence of a stimulus with the presence or absence of an electrical signal. Their transductions usually exhibit high sensitivity and specificity to a stimulus, but are often slow compared to neurons. Neurons appear to be sacrificing the specificity of their stimulus transductions for sensitivity and speed. We interpret cellular stimulus transductions as a cell's assertion that it detected something important at that moment in time. In particular, we consider neural APs as fast but noisy detection assertions. We infer that a principal goal of nervous systems is to detect extremely weak signals from noisy sensory spikes under enormous time pressure. We discuss neural computation proposals that address this goal by casting neurons as devices that implement online, analog, probabilistic computations with their membrane potentials. Those proposals imply a measurable relationship between afferent neural spiking statistics and efferent neural membrane electrophysiology.
Affiliation(s)
- Travis Monk
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Sydney, NSW 2747, Australia
- Nik Dennler
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Sydney, NSW 2747, Australia
- Biocomputation Group, University of Hertfordshire, Hatfield, Hertfordshire AL10 9AB, U.K.
- Nicholas Ralph
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Sydney, NSW 2747, Australia
- Shavika Rastogi
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Sydney, NSW 2747, Australia
- Biocomputation Group, University of Hertfordshire, Hatfield, Hertfordshire AL10 9AB, U.K.
- Saeed Afshar
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Sydney, NSW 2747, Australia
- Pablo Urbizagastegui
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Sydney, NSW 2747, Australia
- Russell Jarvis
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Sydney, NSW 2747, Australia
- André van Schaik
- International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Sydney, NSW 2747, Australia
- Andrew Adamatzky
- Unconventional Computing Laboratory, University of the West of England, Bristol BS16 1QY, U.K.

5. Zeki S, Hale ZF, Beyh A, Rasche SE. Perceptual axioms are irreconcilable with Euclidean geometry. Eur J Neurosci 2024; 60:4217-4223. PMID: 38803020; DOI: 10.1111/ejn.16430.
Abstract
There are different definitions of axioms, but the one that seems to have general approval is that axioms are statements whose truths are universally accepted but cannot be proven; they are the foundation from which further propositional truths are derived. Previous attempts, led by David Hilbert, to show that all of mathematics can be built into an axiomatic system that is complete and consistent failed when Kurt Gödel proved that there will always be statements which are known to be true but can never be proven within the same axiomatic system. But Gödel and his followers took no account of brain mechanisms that generate and mediate logic. In this largely theoretical paper, but backed by previous experiments and our new ones reported below, we show that in the case of so-called 'optical illusions', there exists a significant and irreconcilable difference between their visual perception and their description according to Euclidean geometry; when participants are asked to adjust, from an initial randomised state, the perceptual geometric axioms to conform to the Euclidean description, the two never match, although the degree of mismatch varies between individuals. These results provide evidence that perceptual axioms, or statements known to be perceptually true, cannot be described mathematically. Thus, the logic of the visual perceptual system is irreconcilable with the cognitive (mathematical) system and cannot be updated even when knowledge of the difference between the two is available. Hence, no one brain reality is more 'objective' than any other.
Affiliation(s)
- Semir Zeki
- Laboratory of Neurobiology, University College London, London, UK
- Zachary F Hale
- Laboratory of Neurobiology, University College London, London, UK
- Ahmad Beyh
- Laboratory of Neurobiology, University College London, London, UK
- Samuel E Rasche
- Laboratory of Neurobiology, University College London, London, UK

6. Mazuz Y, Kessler Y, Ganel T. Age-related changes in the susceptibility to visual illusions of size. Sci Rep 2024; 14:14583. PMID: 38918501; PMCID: PMC11199550; DOI: 10.1038/s41598-024-65405-6.
Abstract
As the global population ages, understanding the effect of aging on visual perception is of growing importance. This study investigates age-related changes in size perception across adulthood through the lens of three visual illusions: the Ponzo, Ebbinghaus, and Height-width illusions. Utilizing the Bayesian conceptualization of the aging brain, which posits increased reliance on prior knowledge with age, we explored potential differences in the susceptibility to visual illusions across different age groups in adults (ages 20-85 years). To this end, we used the BTPI (Ben-Gurion University Test for Perceptual Illusions), an online validated battery of visual illusions developed in our lab. The findings revealed distinct patterns of age-related changes for each of the illusions, challenging the idea of a generalized increase in reliance on prior knowledge with age. Specifically, we observed a systematic reduction in susceptibility to the Ebbinghaus illusion with age, while susceptibility to the Height-width illusion increased with age. As for the Ponzo illusion, there were no significant changes with age. These results underscore the complexity of age-related changes in visual perception and converge with previous findings to support the idea that different visual illusions of size are mediated by distinct perceptual mechanisms.
Affiliation(s)
- Yarden Mazuz
- Department of Psychology, Ben-Gurion University of the Negev, 8410500, Beer-Sheva, Israel
- Yoav Kessler
- Department of Psychology, Ben-Gurion University of the Negev, 8410500, Beer-Sheva, Israel
- Tzvi Ganel
- Department of Psychology, Ben-Gurion University of the Negev, 8410500, Beer-Sheva, Israel.

7. Ryan CP, Ciotti S, Balestrucci P, Bicchi A, Lacquaniti F, Bianchi M, Moscatelli A. The relativity of reaching: Motion of the touched surface alters the trajectory of hand movements. iScience 2024; 27:109871. PMID: 38784005; PMCID: PMC11112373; DOI: 10.1016/j.isci.2024.109871.
Abstract
For dexterous control of the hand, humans integrate sensory information and prior knowledge regarding their bodies and the world. We studied the role of touch in hand motor control by challenging a fundamental prior assumption-that self-motion of inanimate objects is unlikely upon contact. In a reaching task, participants slid their fingertips across a robotic interface, with their hand hidden from sight. Unbeknownst to the participants, the robotic interface remained static, followed hand movement, or moved in opposition to it. We considered two hypotheses. Either participants were able to account for surface motion or, if the stationarity assumption held, they would integrate the biased tactile cues and proprioception. Motor errors consistent with the latter hypothesis were observed. The role of visual feedback, tactile sensitivity, and friction was also investigated. Our study carries profound implications for human-machine collaboration in a world where objects may no longer conform to the stationarity assumption.
Affiliation(s)
- Colleen P. Ryan
- Department of Systems Medicine and Centre of Space Biomedicine, University of Rome Tor Vergata, 00133 Rome, Italy
- Laboratory of Neuromotor Physiology, Santa Lucia Foundation IRCCS, 00179 Rome, Italy
- Simone Ciotti
- Laboratory of Neuromotor Physiology, Santa Lucia Foundation IRCCS, 00179 Rome, Italy
- Research Centre E. Piaggio and Department of Information Engineering, University of Pisa, 56122 Pisa, Italy
- Priscilla Balestrucci
- Laboratory of Neuromotor Physiology, Santa Lucia Foundation IRCCS, 00179 Rome, Italy
- Antonio Bicchi
- Research Centre E. Piaggio and Department of Information Engineering, University of Pisa, 56122 Pisa, Italy
- Istituto Italiano di Tecnologia, 16163 Genova, Italy
- Francesco Lacquaniti
- Department of Systems Medicine and Centre of Space Biomedicine, University of Rome Tor Vergata, 00133 Rome, Italy
- Laboratory of Neuromotor Physiology, Santa Lucia Foundation IRCCS, 00179 Rome, Italy
- Matteo Bianchi
- Research Centre E. Piaggio and Department of Information Engineering, University of Pisa, 56122 Pisa, Italy
- Alessandro Moscatelli
- Department of Systems Medicine and Centre of Space Biomedicine, University of Rome Tor Vergata, 00133 Rome, Italy
- Laboratory of Neuromotor Physiology, Santa Lucia Foundation IRCCS, 00179 Rome, Italy

8. Boundy-Singer ZM, Ziemba CM, Hénaff OJ, Goris RLT. How does V1 population activity inform perceptual certainty? J Vis 2024; 24:12. PMID: 38884544; PMCID: PMC11185272; DOI: 10.1167/jov.24.6.12.
Abstract
Neural population activity in sensory cortex informs our perceptual interpretation of the environment. Oftentimes, this population activity will support multiple alternative interpretations. The larger the spread of probability over different alternatives, the more uncertain the selected perceptual interpretation. We test the hypothesis that the reliability of perceptual interpretations can be revealed through simple transformations of sensory population activity. We recorded V1 population activity in fixating macaques while presenting oriented stimuli under different levels of nuisance variability and signal strength. We developed a decoding procedure to infer from V1 activity the most likely stimulus orientation as well as the certainty of this estimate. Our analysis shows that response magnitude, response dispersion, and variability in response gain all offer useful proxies for orientation certainty. Of these three metrics, the last one has the strongest association with the decoder's uncertainty estimates. These results clarify that the nature of neural population activity in sensory cortex provides downstream circuits with multiple options to assess the reliability of perceptual interpretations.
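
As a concrete illustration of reading out both a stimulus estimate and its certainty from a population response, the sketch below decodes orientation from simulated Poisson spike counts with bell-shaped tuning curves and takes the width of the resulting posterior as the certainty signal. It is a generic population-decoding example with invented tuning parameters, not the recording-based decoder developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
prefs = np.linspace(0, 180, 60, endpoint=False)        # preferred orientations (deg)
thetas = np.linspace(0, 180, 361, endpoint=False)      # decoding grid

def tuning(theta, gain=20.0, kappa=2.0):
    """Circular (von Mises-like) tuning on the 180-deg orientation domain."""
    return gain * np.exp(kappa * (np.cos(np.deg2rad(2 * (theta - prefs))) - 1)) + 0.5

true_theta, contrast = 45.0, 0.3                        # low contrast scales the gain down
counts = rng.poisson(contrast * tuning(true_theta))     # one population response

# Poisson log likelihood of each candidate orientation given the counts.
rates = contrast * np.array([tuning(th) for th in thetas])      # (n_thetas, n_neurons)
loglik = counts @ np.log(rates).T - rates.sum(axis=1)
post = np.exp(loglik - loglik.max())
post /= post.sum()

est = thetas[np.argmax(post)]
width = np.sqrt(np.sum(post * (thetas - est)**2))       # crude width, ignoring wrap-around
print(f"decoded orientation ~ {est:.1f} deg, posterior width ~ {width:.1f} deg (certainty proxy)")
```
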
Affiliation(s)
- Zoe M Boundy-Singer
- Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA
- Corey M Ziemba
- Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA
- Robbe L T Goris
- Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA

9. Charlton JA, Goris RLT. Abstract deliberation by visuomotor neurons in prefrontal cortex. Nat Neurosci 2024; 27:1167-1175. PMID: 38684894; PMCID: PMC11156582; DOI: 10.1038/s41593-024-01635-1.
Abstract
During visually guided behavior, the prefrontal cortex plays a pivotal role in mapping sensory inputs onto appropriate motor plans. When the sensory input is ambiguous, this involves deliberation. It is not known whether the deliberation is implemented as a competition between possible stimulus interpretations or between possible motor plans. Here we study neural population activity in the prefrontal cortex of macaque monkeys trained to flexibly report perceptual judgments of ambiguous visual stimuli. We find that the population activity initially represents the formation of a perceptual choice before transitioning into the representation of the motor plan. Stimulus strength and prior expectations both bear on the formation of the perceptual choice, but not on the formation of the action plan. These results suggest that prefrontal circuits involved in action selection are also used for the deliberation of abstract propositions divorced from a specific motor plan, thus providing a crucial mechanism for abstract reasoning.
Affiliation(s)
- Julie A Charlton
- Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Robbe L T Goris
- Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA.

10. Pickard K, Davidson MJ, Kim S, Alais D. Incongruent active head rotations increase visual motion detection thresholds. Neurosci Conscious 2024; 2024:niae019. PMID: 38757119; PMCID: PMC11097904; DOI: 10.1093/nc/niae019.
Abstract
Attributing a visual motion signal to its correct source-be that external object motion, self-motion, or some combination of both-seems effortless, and yet often involves disentangling a complex web of motion signals. Existing literature focuses on either translational motion (heading) or eye movements, leaving much to be learnt about the influence of a wider range of self-motions, such as active head rotations, on visual motion perception. This study investigated how active head rotations affect visual motion detection thresholds, comparing conditions where visual motion and head-turn direction were either congruent or incongruent. Participants judged the direction of a visual motion stimulus while rotating their head or remaining stationary, using a fixation-locked Virtual Reality display with integrated head-movement recordings. Thresholds to perceive visual motion were higher in both active-head rotation conditions compared to stationary, though no differences were found between congruent or incongruent conditions. Participants also showed a significant bias to report seeing visual motion travelling in the same direction as the head rotation. Together, these results demonstrate active head rotations increase visual motion perceptual thresholds, particularly in cases of incongruent visual and active vestibular stimulation.
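
Detection thresholds of the kind reported here are typically obtained by fitting a psychometric function to the proportion of correct responses as a function of signal strength. The sketch below fits a cumulative Gaussian with scipy to made-up data; the stimulus levels, lapse handling, and numbers are illustrative, not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

LAPSE = 0.02   # fixed lapse rate (assumed, not fitted)

coherence = np.array([0.02, 0.04, 0.08, 0.16, 0.32, 0.64])   # hypothetical signal strengths
p_correct = np.array([0.52, 0.55, 0.68, 0.85, 0.96, 0.99])   # hypothetical proportions correct

def psychometric(x, mu, sigma):
    """Cumulative Gaussian scaled from chance (0.5) to 1 - LAPSE."""
    return 0.5 + (0.5 - LAPSE) * norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, coherence, p_correct, p0=[0.1, 0.1])

# Threshold = signal strength at which the fitted curve reaches 75% correct.
threshold = mu + sigma * norm.ppf((0.75 - 0.5) / (0.5 - LAPSE))
print(f"75%-correct threshold ~ {threshold:.3f}")
```

Comparing thresholds fitted separately for stationary, congruent, and incongruent head-rotation conditions is the kind of contrast the study reports; only the fitting scaffold is shown here.
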
Affiliation(s)
- Kate Pickard
- School of Psychology, The University of Sydney, Sydney, NSW 2006, Australia
- Matthew J Davidson
- School of Psychology, The University of Sydney, Sydney, NSW 2006, Australia
- Sujin Kim
- School of Psychology, The University of Sydney, Sydney, NSW 2006, Australia
- David Alais
- School of Psychology, The University of Sydney, Sydney, NSW 2006, Australia

11. Chin BM, Wang M, Mikkelsen LT, Friedman CT, Ng CJ, Chu MA, Cooper EA. A paradigm for characterizing motion misperception in people with typical vision and low vision. Optom Vis Sci 2024; 101:252-262. PMID: 38857038; DOI: 10.1097/opx.0000000000002139.
Abstract
PURPOSE: We aimed to develop a paradigm that can efficiently characterize motion percepts in people with low vision and compare their responses with well-known misperceptions made by people with typical vision when targets are hard to see.
METHODS: We recruited a small cohort of individuals with reduced acuity and contrast sensitivity (n = 5) as well as a comparison cohort with typical vision (n = 5) to complete a psychophysical study. Study participants were asked to judge the motion direction of a tilted rhombus that was either high or low contrast. In a series of trials, the rhombus oscillated vertically, horizontally, or diagonally. Participants indicated the perceived motion direction using a number wheel with 12 possible directions, and statistical tests were used to examine response biases.
RESULTS: All participants with typical vision showed systematic misperceptions well predicted by a Bayesian inference model. Specifically, their perception of vertical or horizontal motion was biased toward directions orthogonal to the long axis of the rhombus. They had larger biases for hard-to-see (low contrast) stimuli. Two participants with low vision had a similar bias, but with no difference between high- and low-contrast stimuli. The other participants with low vision were unbiased in their percepts or biased in the opposite direction.
CONCLUSIONS: Our results suggest that some people with low vision may misperceive motion in a systematic way similar to people with typical vision. However, we observed large individual differences. Future work will aim to uncover reasons for such differences and identify aspects of vision that predict susceptibility.
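
The Bayesian account referenced in the results — weaker sensory evidence lets a prior pull the percept further from the true direction — reduces, in its simplest Gaussian form, to a reliability-weighted average. The sketch below is that deliberately simplified 1D version, with the prior placed toward the direction orthogonal to the rhombus's long axis and a likelihood whose width grows as contrast falls; the directions and widths are assumptions for illustration, not the authors' full model.

```python
import numpy as np

def perceived_direction(true_dir, prior_dir, contrast,
                        prior_sd=20.0, base_sensory_sd=5.0):
    """Posterior-mean direction (deg) when a Gaussian likelihood meets a Gaussian prior.

    Lower contrast -> noisier likelihood -> stronger pull toward the prior direction.
    All parameter values are illustrative.
    """
    sensory_sd = base_sensory_sd / contrast
    w_sense = 1.0 / sensory_sd**2
    w_prior = 1.0 / prior_sd**2
    return (w_sense * true_dir + w_prior * prior_dir) / (w_sense + w_prior)

true_dir = 90.0        # vertical oscillation
prior_dir = 60.0       # direction orthogonal to the rhombus's long axis (assumed)
for c in (1.0, 0.1):   # high vs low contrast
    print(f"contrast {c:>4}: perceived ~ {perceived_direction(true_dir, prior_dir, c):.1f} deg")
```

With these numbers the high-contrast percept stays near the true direction while the low-contrast percept is drawn markedly toward the prior, mirroring the contrast dependence described for typically sighted observers.
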
Affiliation(s)
- Benjamin M Chin
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, California
- Minqi Wang
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, California
- Loganne T Mikkelsen
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, California
- Clara T Friedman
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, California
- Cherlyn J Ng
- Department of Cognitive Sciences, The University of California, Irvine, Irvine, California
- Marlena A Chu
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, California

12. Shaw S, Kilpatrick ZP. Representing stimulus motion with waves in adaptive neural fields. J Comput Neurosci 2024; 52:145-164. PMID: 38607466; DOI: 10.1007/s10827-024-00869-z.
Abstract
Traveling waves of neural activity emerge in cortical networks both spontaneously and in response to stimuli. The spatiotemporal structure of waves can indicate the information they encode and the physiological processes that sustain them. Here, we investigate the stimulus-response relationships of traveling waves emerging in adaptive neural fields as a model of visual motion processing. Neural field equations model the activity of cortical tissue as a continuum excitable medium, and adaptive processes provide negative feedback, generating localized activity patterns. Synaptic connectivity in our model is described by an integral kernel that weakens dynamically due to activity-dependent synaptic depression, leading to marginally stable traveling fronts (with attenuated backs) or pulses of a fixed speed. Our analysis quantifies how weak stimuli shift the relative position of these waves over time, characterized by a wave response function we obtain perturbatively. Persistent and continuously visible stimuli model moving visual objects. Intermittent flashes that hop across visual space can produce the experience of smooth apparent visual motion. Entrainment of waves to both kinds of moving stimuli are well characterized by our theory and numerical simulations, providing a mechanistic description of the perception of visual motion.
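
To make the model class concrete, here is a minimal 1D neural field with synaptic depression, integrated with forward Euler: activity u(x, t) is driven through a Gaussian synaptic kernel whose efficacy q(x, t) depresses with use. The parameter values and initial condition are invented and may need tuning to obtain a cleanly propagating pulse; treat this as a scaffold for the model class, not the authors' equations or analysis.

```python
import numpy as np

# Space, time step, and kernel for a minimal 1D adaptive neural field.
nx, dt, n_steps = 400, 0.05, 4000
x = np.linspace(-10.0, 10.0, nx)
dx = x[1] - x[0]
w = np.exp(-(x[:, None] - x[None, :])**2)              # Gaussian synaptic kernel w(x - y)

def f(u):
    """Sigmoidal firing-rate function."""
    return 1.0 / (1.0 + np.exp(-20.0 * (u - 0.2)))

tau_q, beta = 50.0, 2.0                                # depression time constant and strength
u = 0.6 * np.exp(-x**2)                                # initial activity bump at the center
q = np.ones(nx)
q[x < 0] = 0.4                                         # pre-depress one side to pick a direction

for _ in range(n_steps):
    drive = dx * (w @ (q * f(u)))                      # sum_y w(x - y) q(y) f(u(y)) dy
    u = u + dt * (-u + drive)                          # du/dt = -u + synaptic drive
    q = q + dt * ((1.0 - q) / tau_q - beta * q * f(u)) # activity-dependent depression of q

print(f"activity peak moved from x = 0.00 to x = {x[np.argmax(u)]:.2f}")
```

A weak moving input added to the drive term is the natural next step for probing how such a pulse entrains to continuous or intermittent stimuli, which is the question the paper addresses analytically.
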
Affiliation(s)
- Sage Shaw
- Department of Applied Mathematics, University of Colorado Boulder, Boulder, CO, USA
- Zachary P Kilpatrick
- Department of Applied Mathematics, University of Colorado Boulder, Boulder, CO, USA.
- Institute for Cognitive Sciences, University of Colorado Boulder, Boulder, CO, USA.

13. Sun Q, Wang JY, Gong XM. Conflicts between short- and long-term experiences affect visual perception through modulating sensory or motor response systems: Evidence from Bayesian inference models. Cognition 2024; 246:105768. PMID: 38479091; DOI: 10.1016/j.cognition.2024.105768.
Abstract
The independent effects of short- and long-term experiences on visual perception have been discussed for decades. However, no study has investigated whether and how these experiences simultaneously affect our visual perception. To address this question, we asked participants to estimate their self-motion directions (i.e., headings) simulated from optic flow, in which a long-term experience learned in everyday life (i.e., straight-forward motion being more common than lateral motion) plays an important role. The headings were selected from three distributions that resembled a peak, a hill, and a flat line, creating different short-term experiences. Importantly, the proportions of headings deviating from the straight-forward motion gradually increased in the peak, hill, and flat distributions, leading to a greater conflict between long- and short-term experiences. The results showed that participants biased their heading estimates towards the straight-ahead direction and previously seen headings, which increased with the growing experience conflict. This suggests that both long- and short-term experiences simultaneously affect visual perception. Finally, we developed two Bayesian models (Model 1 vs. Model 2) based on two assumptions that the experience conflict altered the likelihood distribution of sensory representation or the motor response system. The results showed that both models accurately predicted participants' estimation biases. However, Model 1 predicted a higher variance of serial dependence compared to Model 2, while Model 2 predicted a higher variance of the bias towards the straight-ahead direction compared to Model 1. This suggests that the experience conflict can influence visual perception by affecting both sensory and motor response systems. Taken together, the current study systematically revealed the effects of long- and short-term experiences on visual perception and the underlying Bayesian processing mechanisms.
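
The qualitative claim — estimates are simultaneously pulled toward straight ahead (a long-term prior) and toward recently seen headings (a short-term, serial-dependence prior) — can be written as a product of Gaussians acting on a Gaussian likelihood. The sketch below is that generic formulation with invented widths; it is not either of the two specific models (sensory vs. motor-response locus) compared in the paper.

```python
import numpy as np

def heading_estimate(true_heading, prev_heading,
                     sd_sensory=6.0, sd_longterm=25.0, sd_serial=15.0):
    """Posterior mean when a Gaussian likelihood is combined with two Gaussian priors.

    The long-term prior is centered on straight ahead (0 deg); the short-term prior
    on the previously seen heading. All widths are illustrative.
    """
    w = np.array([1 / sd_sensory**2, 1 / sd_longterm**2, 1 / sd_serial**2])
    mu = np.array([true_heading, 0.0, prev_heading])
    return np.sum(w * mu) / np.sum(w)

# A 20-deg heading is pulled both toward straight ahead and toward the previous heading.
print(heading_estimate(true_heading=20.0, prev_heading=-10.0))
```
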
Affiliation(s)
- Qi Sun
- Department of Psychology, Zhejiang Normal University, Jinhua, PR China; Intelligent Laboratory of Zhejiang Province in Mental Health and Crisis Intervention for Children and Adolescents, Jinhua, PR China; Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, PR China.
- Jing-Yi Wang
- Department of Psychology, Zhejiang Normal University, Jinhua, PR China
- Xiu-Mei Gong
- Department of Psychology, Zhejiang Normal University, Jinhua, PR China

14. Futagawa K, Ikeda H, Negishi L, Kurumizaka H, Yamamoto A, Furihata K, Ito Y, Ikeya T, Nagata K, Funabara D, Suzuki M. Structural and Functional Analysis of the Amorphous Calcium Carbonate-Binding Protein Paramyosin in the Shell of the Pearl Oyster, Pinctada fucata. Langmuir 2024; 40:8373-8392. PMID: 38606767; DOI: 10.1021/acs.langmuir.3c03820.
Abstract
Amorphous calcium carbonate (ACC) is an important precursor phase for the formation of aragonite crystals in the shells of Pinctada fucata. To identify the ACC-binding protein in the inner aragonite layer of the shell, extracts from the shell were used in the ACC-binding experiments. Semiquantitative analyses using liquid chromatography-mass spectrometry revealed that paramyosin was strongly associated with ACC in the shell. We discovered that paramyosin, a major component of the adductor muscle, was included in the myostracum, which is the microstructure of the shell attached to the adductor muscle. Purified paramyosin accumulates calcium carbonate and induces the prism structure of aragonite crystals, which is related to the morphology of prism aragonite crystals in the myostracum. Nuclear magnetic resonance measurements revealed that the Glu-rich region was bound to ACC. Activity of the Glu-rich region was stronger than that of the Asp-rich region. These results suggest that paramyosin in the adductor muscle is involved in the formation of aragonite prisms in the myostracum.
Affiliation(s)
- Kei Futagawa
- Department of Applied Biological Chemistry, Graduate School of Agricultural and Life Sciences, University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo 113-8657, Japan
- Haruka Ikeda
- Department of Applied Biological Chemistry, Graduate School of Agricultural and Life Sciences, University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo 113-8657, Japan
- Lumi Negishi
- Institute for Quantitative Biosciences, The University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo 113-8657, Japan
- Hitoshi Kurumizaka
- Institute for Quantitative Biosciences, The University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo 113-8657, Japan
- Ayame Yamamoto
- Graduate School of Bioresources, Mie University, Tsu, Mie 514-8507, Japan
- Kazuo Furihata
- Department of Applied Biological Chemistry, Graduate School of Agricultural and Life Sciences, University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo 113-8657, Japan
- Yutaka Ito
- Department of Chemistry, Tokyo Metropolitan University, 1-1 minami-Osawa, Hachioji, Tokyo 192-0397, Japan
- Teppei Ikeya
- Department of Chemistry, Tokyo Metropolitan University, 1-1 minami-Osawa, Hachioji, Tokyo 192-0397, Japan
- Koji Nagata
- Department of Applied Biological Chemistry, Graduate School of Agricultural and Life Sciences, University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo 113-8657, Japan
- Daisuke Funabara
- Graduate School of Bioresources, Mie University, Tsu, Mie 514-8507, Japan
- Michio Suzuki
- Department of Applied Biological Chemistry, Graduate School of Agricultural and Life Sciences, University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo 113-8657, Japan

15. Kingdom FAA, Yakobi Y, Wang XC. Stereoscopic slant contrast revisited. J Vis 2024; 24:24. PMID: 38683571; PMCID: PMC11059801; DOI: 10.1167/jov.24.4.24.
Abstract
The perceived slant of a stereoscopic surface is altered by the presence of a surrounding surface, a phenomenon termed stereo slant contrast. Previous studies have shown that a slanted surround causes a fronto-parallel surface to appear slanted in the opposite direction, an instance of "bidirectional" contrast. A few studies have examined slant contrast using slanted as opposed to fronto-parallel test surfaces, and these also have shown slant contrast. Here, we use a matching method to examine slant contrast over a wide range of combinations of surround and test slants, one aim being to determine whether stereo slant contrast transfers across opposite directions of test and surround slant. We also examine the effect of the test on the perceived slant of the surround. Test slant contrast was found to be bidirectional in virtually all test-surround combinations and transferred across opposite test and surround slants, with little or no decline in magnitude as the test-surround slant difference approached the limit. There was a weak bidirectional effect of the test slant on the perceived slant of the surround. We consider how our results might be explained by four mechanisms: (a) normalization of stereo slant to vertical; (b) divisive normalization of stereo slant channels in a manner analogous to the tilt illusion; (c) interactions between center and surround disparity-gradient detectors; and (d) uncertainty in slant estimation. We conclude that the third of these (interactions between center and surround disparity-gradient detectors) is the most likely cause of stereo slant contrast.
Affiliation(s)
- Frederick A A Kingdom
- McGill Vision Research, Department of Ophthalmology, Montréal General Hospital, Montréal, QC, Canada
- Yoel Yakobi
- McGill Vision Research, Department of Ophthalmology, Montréal General Hospital, Montréal, QC, Canada
- Xingao Clara Wang
- McGill Vision Research, Department of Ophthalmology, Montréal General Hospital, Montréal, QC, Canada

16. Hahn M, Wei XX. A unifying theory explains seemingly contradictory biases in perceptual estimation. Nat Neurosci 2024; 27:793-804. PMID: 38360947; DOI: 10.1038/s41593-024-01574-x.
Abstract
Perceptual biases are widely regarded as offering a window into the neural computations underlying perception. To understand these biases, previous work has proposed a number of conceptually different, and even seemingly contradictory, explanations, including attraction to a Bayesian prior, repulsion from the prior due to efficient coding and central tendency effects on a bounded range. We present a unifying Bayesian theory of biases in perceptual estimation derived from first principles. We demonstrate theoretically an additive decomposition of perceptual biases into attraction to a prior, repulsion away from regions with high encoding precision and regression away from the boundary. The results reveal a simple and universal rule for predicting the direction of perceptual biases. Our theory accounts for, and yields, new insights regarding biases in the perception of a variety of stimulus attributes, including orientation, color and magnitude. These results provide important constraints on the neural implementations of Bayesian computations.
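
The ingredients named in this abstract — a prior, encoding precision that varies across the stimulus space, and a bounded range — can be dropped into a small numerical observer to watch attractive and repulsive biases emerge. The simulation below uses an efficient-coding-style encoder (noise added after transforming the stimulus by the prior CDF) and a posterior-mean decoder; it is a sandbox for building intuition under those assumptions, not the paper's analytical decomposition, and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
theta = np.linspace(0.01, 0.99, 400)                   # stimulus variable on a bounded range
dth = theta[1] - theta[0]
prior = np.exp(-0.5 * ((theta - 0.5) / 0.15)**2)       # prior peaked in the middle of the range
prior /= prior.sum() * dth
F = np.cumsum(prior) * dth                             # prior CDF: efficient-coding transform
noise_sd = 0.08                                        # sensory noise in the transformed space

def mean_estimate(th0, n_samples=3000):
    """Average posterior-mean estimate for a fixed true stimulus th0."""
    m = np.interp(th0, theta, F) + rng.normal(0.0, noise_sd, n_samples)
    ests = np.empty(n_samples)
    for i, mi in enumerate(m):
        post = np.exp(-0.5 * ((F - mi) / noise_sd)**2) * prior   # likelihood x prior
        ests[i] = np.sum(post * theta) / np.sum(post)            # posterior mean estimate
    return ests.mean()

for th0 in (0.2, 0.35, 0.5, 0.65, 0.8):
    print(f"theta = {th0:.2f}   mean bias = {mean_estimate(th0) - th0:+.4f}")
```

Changing the prior, the noise level, or the loss function in this sandbox shifts the sign and location of the biases, which is the kind of dependence the unified theory characterizes analytically.
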
Affiliation(s)
- Xue-Xin Wei
- Department of Neuroscience, Department of Psychology, Center for Perceptual Systems, Center for Learning and Memory, Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, TX, USA.

17. Maruya A, Zaidi Q. Perceptual transitions between object rigidity and non-rigidity: Competition and cooperation among motion energy, feature tracking, and shape-based priors. J Vis 2024; 24:3. PMID: 38306112; PMCID: PMC10848565; DOI: 10.1167/jov.24.2.3.
Abstract
Why do moving objects appear rigid when projected retinal images are deformed non-rigidly? We used rotating rigid objects that can appear rigid or non-rigid to test whether shape features contribute to rigidity perception. When two circular rings were rigidly linked at an angle and jointly rotated at moderate speeds, observers reported that the rings wobbled and were not linked rigidly, but rigid rotation was reported at slow speeds. When gaps, paint, or vertices were added, the rings appeared rigidly rotating even at moderate speeds. At high speeds, all configurations appeared non-rigid. Salient features thus contribute to rigidity at slow and moderate speeds but not at high speeds. Simulated responses of arrays of motion-energy cells showed that motion flow vectors are predominantly orthogonal to the contours of the rings, not parallel to the rotation direction. A convolutional neural network trained to distinguish flow patterns for wobbling versus rotation gave a high probability of wobbling for the motion-energy flows. However, the convolutional neural network gave high probabilities of rotation for motion flows generated by tracking features with arrays of MT pattern-motion cells and corner detectors. In addition, circular rings can appear to spin and roll despite the absence of any sensory evidence, and this illusion is prevented by vertices, gaps, and painted segments, showing the effects of rotational symmetry and shape. Combining convolutional neural network outputs that give greater weight to motion energy at fast speeds and to feature tracking at slow speeds, with the shape-based priors for wobbling and rolling, explained rigid and non-rigid percepts across shapes and speeds (R2 = 0.95). The results demonstrate how cooperation and competition between different neuronal classes lead to specific states of visual perception and to transitions between the states.
Affiliation(s)
- Akihito Maruya
- Graduate Center for Vision Research, State University of New York, New York, NY, USA
- Qasim Zaidi
- Graduate Center for Vision Research, State University of New York, New York, NY, USA

18. Manavalan M, Song X, Nolte T, Fonagy P, Montague PR, Vilares I. Bayesian Decision-Making Under Uncertainty in Borderline Personality Disorder. J Pers Disord 2024; 38:53-74. PMID: 38324252; DOI: 10.1521/pedi.2024.38.1.53.
Abstract
Bayesian decision theory suggests that optimal decision-making should use and weigh prior beliefs with current information, according to their relative uncertainties. However, some characteristics of borderline personality disorder (BPD) patients, such as fast, drastic changes in the overall perception of themselves and others, suggest they may be under-relying on priors. Here, we investigated whether BPD patients have a general deficit in relying on or combining prior with current information. We analyzed this by having BPD patients (n = 23) and healthy controls (n = 18) perform a coin-catching sensorimotor task with varying levels of prior and current information uncertainty. Our results indicate that BPD patients learned and used prior information and combined it with current information in a qualitatively Bayesian-like way. Our results show that, at least in a lower-level, nonsocial sensorimotor task, BPD patients can appropriately use both prior and current information, illustrating that potential deficits using priors may not be widespread or domain-general.
Affiliation(s)
- Mathi Manavalan
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota
- Xin Song
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota
- Tobias Nolte
- Wellcome Centre for Human Neuroimaging, University College London, London, U.K
- Anna Freud National Centre for Children and Families, London, U.K
- Peter Fonagy
- Wellcome Centre for Human Neuroimaging, University College London, London, U.K
- Anna Freud National Centre for Children and Families, London, U.K
- P Read Montague
- Wellcome Centre for Human Neuroimaging, University College London, London, U.K
- Fralin Biomedical Research Institute at VTC, Virginia Polytechnic Institute and State University, Roanoke, Virginia
- Department of Physics, Virginia Polytechnic Institute and State University, Blacksburg, Virginia
- Iris Vilares
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota

19. Luo R, Mai X, Meng J. Effect of motion state variability on error-related potentials during continuous feedback paradigms and their consequences for classification. J Neurosci Methods 2024; 401:109982. PMID: 37839711; DOI: 10.1016/j.jneumeth.2023.109982.
Abstract
BACKGROUND: An erroneous motion elicits an error-related potential (ErrP) when humans monitor the behavior of an external device. This EEG modality has been widely applied to brain-computer interfaces, in an active or passive manner, with discrete visual feedback. However, the effect of a variable motion state on ErrP morphology and classification performance raises concerns when the interaction is conducted with continuous visual feedback.
NEW METHOD: In the present study, we designed a cursor control experiment. Participants monitored a continuously moving cursor to reach the target on one side of the screen. The motion state varied multiple times along two factors: (1) motion direction and (2) motion speed. The effects of these two factors on the morphological characteristics and classification performance of the ErrP were analyzed. Furthermore, an offline simulation was performed to evaluate the effectiveness of the proposed extended ErrP-decoder in resolving the interference caused by motion direction changes.
RESULTS: The statistical analyses revealed that motion direction and motion speed significantly influenced the amplitude of the feedback-ERN and frontal-Pe components, while only motion direction significantly affected classification performance.
COMPARISON WITH EXISTING METHODS: A significant bias was found in ErrP detection using classical correct-versus-erroneous event training. However, this bias can be alleviated by 16% with the extended ErrP-decoder.
CONCLUSION: The morphology and classification performance of the ErrP signal can be affected by motion state variability during continuous feedback paradigms. The results enhance the understanding of ErrP morphological components and shed light on the detection of a BCI's erroneous behavior in practical continuous control.
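
For readers outside the BCI field, ErrP detection of the kind discussed here is usually framed as binary classification of time-locked EEG epochs (error vs. correct feedback). The sketch below runs a cross-validated linear discriminant on simulated epoch features with scikit-learn; the data, features, and the paper's extended ErrP-decoder are not reproduced, so treat this purely as the surrounding workflow.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n_epochs, n_features = 400, 64          # e.g., channels x time windows, flattened
y = rng.integers(0, 2, n_epochs)        # 1 = erroneous feedback, 0 = correct feedback

# Simulated features: error epochs carry a small added 'ERN/Pe-like' component.
X = rng.normal(0, 1, (n_epochs, n_features))
X[y == 1, :8] += 0.6                    # class-dependent shift on a few features

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```

The paper's point about motion-state variability would show up here as a drop in AUC when the classifier is trained on epochs from one motion direction and tested on another.
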
Affiliation(s)
- Ruijie Luo
- Department of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ximing Mai
- Department of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jianjun Meng
- Department of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China; State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai, China.

20. Casartelli L, Maronati C, Cavallo A. From neural noise to co-adaptability: Rethinking the multifaceted architecture of motor variability. Phys Life Rev 2023; 47:245-263. PMID: 37976727; DOI: 10.1016/j.plrev.2023.10.036.
Abstract
In the last decade, the source and the functional meaning of motor variability have attracted considerable attention in behavioral and brain sciences. This construct classically combined different levels of description, variable internal robustness or coherence, and multifaceted operational meanings. We provide here a comprehensive review of the literature with the primary aim of building a precise lexicon that goes beyond the generic and monolithic use of motor variability. In the pars destruens of the work, we model three domains of motor variability related to peculiar computational elements that influence fluctuations in motor outputs. Each domain is in turn characterized by multiple sub-domains. We begin with the domains of noise and differentiation. However, the main contribution of our model concerns the domain of adaptability, which refers to variation within the same exact motor representation. In particular, we use the terms learning and (social)fitting to specify the portions of motor variability that depend on our propensity to learn and on our largely constitutive propensity to be influenced by external factors. A particular focus is on motor variability in the context of the sub-domain named co-adaptability. Further groundbreaking challenges arise in the modeling of motor variability. Therefore, in a separate pars construens, we attempt to characterize these challenges, addressing both theoretical and experimental aspects as well as potential clinical implications for neurorehabilitation. All in all, our work suggests that motor variability is neither simply detrimental nor beneficial, and that studying its fluctuations can provide meaningful insights for future research.
Affiliation(s)
- Luca Casartelli
- Theoretical and Cognitive Neuroscience Unit, Scientific Institute IRCCS E. MEDEA, Italy
- Camilla Maronati
- Move'n'Brains Lab, Department of Psychology, Università degli Studi di Torino, Italy
- Andrea Cavallo
- Move'n'Brains Lab, Department of Psychology, Università degli Studi di Torino, Italy; C'MoN Unit, Fondazione Istituto Italiano di Tecnologia, Genova, Italy.

21. Sun Q, Gong XM, Zhan LZ, Wang SY, Dong LL. Serial dependence bias can predict the overall estimation error in visual perception. J Vis 2023; 23:2. PMID: 37917052; PMCID: PMC10627302; DOI: 10.1167/jov.23.13.2.
Abstract
Although visual feature estimations are accurate and precise, overall estimation errors (i.e., the difference between estimates and actual values) tend to show systematic patterns. For example, estimates of orientations are systematically biased away from horizontal and vertical orientations, showing an oblique illusion. Additionally, many recent studies have demonstrated that estimations of current visual features are systematically biased toward previously seen features, showing a serial dependence. However, no study examined whether the overall estimation errors were correlated with the serial dependence bias. To address this question, we enrolled three groups of participants to estimate orientation, motion speed, and point-light-walker direction. The results showed that the serial dependence bias explained over 20% of overall estimation errors in the three tasks, indicating that we could use the serial dependence bias to predict the overall estimation errors. The current study first demonstrated that the serial dependence bias was not independent from the overall estimation errors. This finding could inspire researchers to investigate the neural bases underlying the visual feature estimation and serial dependence.
Affiliation(s)
- Qi Sun
- School of Psychology, Zhejiang Normal University, Jinhua, PRC
- Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, China, PRC
- Xiu-Mei Gong
- School of Psychology, Zhejiang Normal University, Jinhua, PRC
- Lin-Zhe Zhan
- School of Psychology, Zhejiang Normal University, Jinhua, PRC
- Si-Yu Wang
- School of Psychology, Zhejiang Normal University, Jinhua, PRC

22. Lee ARI, Wilcox LM, Allison RS. Perceiving depth and motion in depth from successive occlusion. J Vis 2023; 23:2. PMID: 37796523; PMCID: PMC10561775; DOI: 10.1167/jov.23.12.2.
Abstract
Occlusion, or interposition, is one of the strongest and best-known pictorial cues to depth. Furthermore, the successive occlusions of previous objects by newly presented objects produces an impression of increasing depth. Although the perceived motion associated with this illusion has been studied, the depth percept has not. To investigate, participants were presented with two piles of disks with one always static and the other either a static pile or a stacking pile where a new disk was added every 200 ms. We found static piles with equal number of disks appeared equal in height. In contrast, the successive presentation of disks in the stacking condition appeared to enhance the perceived height of the stack-fewer disks were needed to match the static pile. Surprisingly, participants were also more precise when comparing stacking versus static piles of disks. Reversing the stacking by removing rather than adding disks reversed the bias and degraded precision. In follow-up experiments, we used nonoverlapping static and dynamic configurations to show that the effects are not due to simple differences in perceived numerosity. In sum, our results show that successive occlusions generate a greater sense of height than occlusion alone, and we posit that dynamic occlusion may be an underappreciated source of depth information.
Affiliation(s)
- Abigail R I Lee
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Laurie M Wilcox
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Robert S Allison
- Centre for Vision Research, York University, Toronto, Ontario, Canada

23. Quillien T, Tooby J, Cosmides L. Rational inferences about social valuation. Cognition 2023; 239:105566. PMID: 37499313; DOI: 10.1016/j.cognition.2023.105566.
Abstract
The decisions made by other people can contain information about the value they assign to our welfare, for example how much they are willing to sacrifice to make us better off. An emerging body of research suggests that we extract and use this information, responding more favorably to those who sacrifice more even if they provide us with less. The magnitude of their trade-offs governs our social responses to them, including partner choice, giving, and anger. This implies that people have well-designed cognitive mechanisms for estimating the weight someone else assigns to their welfare, even when the amounts at stake vary and the information is noisy or sparse. We tested this hypothesis in two studies (N=200; US samples) by asking participants to observe a partner make two trade-offs, and then predict the partner's decisions in other trials. Their predictions were compared to those of a model that uses statistically optimal procedures, operationalized as a Bayesian ideal observer. As predicted, (i) the estimates people made from sparse evidence matched those of the ideal observer, and (ii) lower welfare trade-offs elicited more anger from participants, even when their total payoffs were held constant. These results support the view that people efficiently update their representations of how much others value them. They also provide the most direct test to date of a key assumption of the recalibrational theory of anger: that anger is triggered by cues of low valuation, not by the infliction of costs.
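The kind of ideal-observer computation described here can be sketched with a simple grid posterior over the partner's welfare weight, updated from a couple of observed trade-offs and then used to predict further choices. The softmax choice rule, the noise level, and all numbers below are assumptions for illustration, not the authors' model.

```python
# Sketch: infer a partner's welfare trade-off weight w from observed choices.
# The partner is assumed to accept a cost to self when w * benefit_to_you
# exceeds cost_to_self (with decision noise). Grid posterior over w; all
# numbers are illustrative assumptions.
import numpy as np

def p_accept(w, cost_self, benefit_other, noise=1.0):
    """Softmax probability that the partner pays cost_self to give benefit_other."""
    utility = w * benefit_other - cost_self
    return 1.0 / (1.0 + np.exp(-utility / noise))

w_grid = np.linspace(0, 2, 201)
posterior = np.ones_like(w_grid)            # flat prior over the welfare weight

# Two observed trade-offs: (cost to partner, benefit to you, did they accept?).
observations = [(3.0, 5.0, True), (6.0, 5.0, False)]
for cost, benefit, accepted in observations:
    like = p_accept(w_grid, cost, benefit)
    posterior *= like if accepted else (1 - like)
posterior /= posterior.sum()

# Predict a new decision the way a Bayesian ideal observer would.
new_cost, new_benefit = 4.0, 5.0
p_new = np.sum(posterior * p_accept(w_grid, new_cost, new_benefit))
print(f"E[w] = {np.sum(w_grid * posterior):.2f}, "
      f"P(accept cost {new_cost} for benefit {new_benefit}) = {p_new:.2f}")
```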
Collapse
Affiliation(s)
- Tadeg Quillien
- Center for Evolutionary Psychology, University of California, Santa Barbara, United States of America; Department of Psychological & Brain Sciences, University of California, Santa Barbara, United States of America.
| | - John Tooby
- Center for Evolutionary Psychology, University of California, Santa Barbara, United States of America; Department of Anthropology, University of California, Santa Barbara, United States of America
| | - Leda Cosmides
- Center for Evolutionary Psychology, University of California, Santa Barbara, United States of America; Department of Psychological & Brain Sciences, University of California, Santa Barbara, United States of America
| |
Collapse
|
24
|
Noel JP, Bill J, Ding H, Vastola J, DeAngelis GC, Angelaki DE, Drugowitsch J. Causal inference during closed-loop navigation: parsing of self- and object-motion. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220344. [PMID: 37545300 PMCID: PMC10404925 DOI: 10.1098/rstb.2022.0344] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Accepted: 06/20/2023] [Indexed: 08/08/2023] Open
Abstract
A key computation in building adaptive internal models of the external world is to ascribe sensory signals to their likely cause(s), a process of causal inference (CI). CI is well studied within the framework of two-alternative forced-choice tasks, but less well understood within the context of naturalistic action-perception loops. Here, we examine the process of disambiguating retinal motion caused by self- and/or object-motion during closed-loop navigation. First, we derive a normative account specifying how observers ought to intercept hidden and moving targets given their belief about (i) whether retinal motion was caused by the target moving, and (ii) if so, with what velocity. Next, in line with the modelling results, we show that humans report targets as stationary and steer towards their initial rather than final position more often when they are themselves moving, suggesting a putative misattribution of object-motion to the self. Further, we predict that observers should misattribute retinal motion more often: (i) during passive rather than active self-motion (given the lack of an efference copy informing self-motion estimates in the former), and (ii) when targets are presented eccentrically rather than centrally (given that lateral self-motion flow vectors are larger at eccentric locations during forward self-motion). Results support both of these predictions. Lastly, analysis of eye movements shows that, while initial saccades toward targets were largely accurate regardless of the self-motion condition, subsequent gaze pursuit was modulated by target velocity during object-only motion, but not during concurrent object- and self-motion. These results demonstrate CI within action-perception loops, and suggest a protracted temporal unfolding of the computations characterizing CI. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
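The core causal-inference step can be illustrated with a toy Gaussian model: given a retinal motion measurement and a (possibly unreliable) self-motion estimate, compute the posterior probability that the object actually moved. The Gaussian likelihoods, the prior, and every number below are illustrative assumptions, not the paper's normative model.

```python
# Sketch: causal inference on retinal motion during self-motion.
# Retinal motion r = self-motion component (+ object velocity if C=1) + noise.
# The observer asks whether the object moved (C=1) or was stationary (C=0).
# Gaussian likelihoods and all parameter values are illustrative assumptions.
import numpy as np
from scipy.stats import norm

def p_object_moving(r, self_est, sigma_r=1.0, sigma_self=1.5,
                    sigma_obj=3.0, prior_moving=0.5):
    """Posterior probability that retinal motion r reflects object motion."""
    # C=0: r is explained by (noisy) self-motion alone.
    like_stationary = norm.pdf(r, loc=self_est, scale=np.hypot(sigma_r, sigma_self))
    # C=1: r additionally contains an unknown object velocity ~ N(0, sigma_obj).
    like_moving = norm.pdf(r, loc=self_est,
                           scale=np.sqrt(sigma_r**2 + sigma_self**2 + sigma_obj**2))
    post = prior_moving * like_moving
    return post / (post + (1 - prior_moving) * like_stationary)

# Same retinal motion, but a less reliable self-motion estimate (passive
# movement, no efference copy) makes the stationary-object explanation more
# plausible, i.e., more misattribution of object motion to the self.
r_obs, self_est = 7.0, 3.0
print("active  (precise self-motion):", round(p_object_moving(r_obs, self_est, sigma_self=1.0), 2))
print("passive (noisy self-motion)  :", round(p_object_moving(r_obs, self_est, sigma_self=3.0), 2))
```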
Collapse
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, NY 10003, USA
| | - Johannes Bill
- Department of Neurobiology, Harvard University, Boston, MA 02115, USA
- Department of Psychology, Harvard University, Boston, MA 02115, USA
| | - Haoran Ding
- Center for Neural Science, New York University, New York, NY 10003, USA
| | - John Vastola
- Department of Neurobiology, Harvard University, Boston, MA 02115, USA
| | - Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14611, USA
| | - Dora E. Angelaki
- Center for Neural Science, New York University, New York, NY 10003, USA
- Tandon School of Engineering, New York University, New York, NY 10003, USA
| | - Jan Drugowitsch
- Department of Neurobiology, Harvard University, Boston, MA 02115, USA
- Center for Brain Science, Harvard University, Boston, MA 02115, USA
| |
Collapse
|
25
|
Manookin MB, Rieke F. Two Sides of the Same Coin: Efficient and Predictive Neural Coding. Annu Rev Vis Sci 2023; 9:293-311. [PMID: 37220331 DOI: 10.1146/annurev-vision-112122-020941] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Some visual properties are consistent across a wide range of environments, while other properties are more labile. The efficient coding hypothesis states that many of these regularities in the environment can be discarded from neural representations, thus allocating more of the brain's dynamic range to properties that are likely to vary. This paradigm is less clear about how the visual system prioritizes different pieces of information that vary across visual environments. One solution is to prioritize information that can be used to predict future events, particularly those that guide behavior. The relationship between the efficient coding and future prediction paradigms is an area of active investigation. In this review, we argue that these paradigms are complementary and often act on distinct components of the visual input. We also discuss how normative approaches to efficient coding and future prediction can be integrated.
Collapse
Affiliation(s)
- Michael B Manookin
- Department of Ophthalmology, University of Washington, Seattle, Washington, USA;
- Vision Science Center, University of Washington, Seattle, Washington, USA
- Karalis Johnson Retina Center, University of Washington, Seattle, Washington, USA
| | - Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, Washington, USA;
- Vision Science Center, University of Washington, Seattle, Washington, USA
| |
Collapse
|
26
|
Fulvio JM, Rokers B, Samaha J. Task feedback suggests a post-perceptual component to serial dependence. J Vis 2023; 23:6. [PMID: 37682557 PMCID: PMC10500366 DOI: 10.1167/jov.23.10.6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Accepted: 08/14/2023] [Indexed: 09/09/2023] Open
Abstract
Decisions across a range of perceptual tasks are biased toward past stimuli. Such serial dependence is thought to be an adaptive low-level mechanism that promotes perceptual stability across time. However, recent studies suggest post-perceptual mechanisms may also contribute to serially biased responses, calling into question a single locus of serial dependence and the nature of integration of past and present sensory inputs. We measured serial dependence in the context of a three-dimensional (3D) motion perception task where uncertainty in the sensory information varied substantially from trial to trial. We found that serial dependence varied with stimulus properties that impact sensory uncertainty on the current trial. Reduced stimulus contrast was associated with an increased bias toward the stimulus direction of the previous trial. Critically, performance feedback, which reduced sensory uncertainty, abolished serial dependence. These results provide clear evidence for a post-perceptual locus of serial dependence in 3D motion perception and support the role of serial dependence as a response strategy in the face of substantial sensory uncertainty.
Collapse
Affiliation(s)
| | - Bas Rokers
- Department of Psychology, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Department of Psychology and Center for Neural Science, New York University, New York, NY, USA
| | - Jason Samaha
- Department of Psychology, University of California, Santa Cruz, Santa Cruz, CA, USA
| |
Collapse
|
27
|
Yu K, Tuerlinckx F, Vanpaemel W, Zaman J. Humans display interindividual differences in the latent mechanisms underlying fear generalization behaviour. COMMUNICATIONS PSYCHOLOGY 2023; 1:5. [PMID: 39242719 PMCID: PMC11290606 DOI: 10.1038/s44271-023-00005-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Accepted: 06/13/2023] [Indexed: 09/09/2024]
Abstract
Human generalization research aims to understand the processes underlying the transfer of prior experiences to new contexts. Generalization research predominantly relies on descriptive statistics, assumes a single generalization mechanism, interprets generalization from mono-source data, and disregards individual differences. Unfortunately, such an approach fails to disentangle various mechanisms underlying generalization behaviour and can readily result in biased conclusions regarding generalization tendencies. Therefore, we combined a computational model with multi-source data to mechanistically investigate human generalization behaviour. By simultaneously modelling learning, perceptual and generalization data at the individual level, we revealed meaningful variations in how different mechanisms contribute to generalization behaviour. The current research suggests the need for revising the theoretical and analytic foundations in the field to shift the attention away from forecasting group-level generalization behaviour and toward understanding how such phenomena emerge at the individual level. This raises the question for future research whether a mechanism-specific differential diagnosis may be beneficial for generalization-related psychiatric disorders.
Collapse
Affiliation(s)
| | | | | | - Jonas Zaman
- KU Leuven, Leuven, Belgium
- University of Hasselt, Hasselt, Belgium
| |
Collapse
|
28
|
Charlton JA, Młynarski WF, Bai YH, Hermundstad AM, Goris RLT. Environmental dynamics shape perceptual decision bias. PLoS Comput Biol 2023; 19:e1011104. [PMID: 37289753 DOI: 10.1371/journal.pcbi.1011104] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Accepted: 04/13/2023] [Indexed: 06/10/2023] Open
Abstract
To interpret the sensory environment, the brain combines ambiguous sensory measurements with knowledge that reflects context-specific prior experience. But environmental contexts can change abruptly and unpredictably, resulting in uncertainty about the current context. Here we address two questions: how should context-specific prior knowledge optimally guide the interpretation of sensory stimuli in changing environments, and do human decision-making strategies resemble this optimum? We probe these questions with a task in which subjects report the orientation of ambiguous visual stimuli that were drawn from three dynamically switching distributions, representing different environmental contexts. We derive predictions for an ideal Bayesian observer that leverages knowledge about the statistical structure of the task to maximize decision accuracy, including knowledge about the dynamics of the environment. We show that its decisions are biased by the dynamically changing task context. The magnitude of this decision bias depends on the observer's continually evolving belief about the current context. The model therefore not only predicts that decision bias will grow as the context is indicated more reliably, but also as the stability of the environment increases, and as the number of trials since the last context switch grows. Analysis of human choice data validates all three predictions, suggesting that the brain leverages knowledge of the statistical structure of environmental change when interpreting ambiguous sensory signals.
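A minimal way to see how a dynamically evolving context belief produces a growing decision bias is a hidden-Markov-style observer: carry a posterior over contexts forward with a switch (hazard) rate, update it with each noisy measurement, and let the implied prior mean pull the orientation estimate. The context means, noise levels, and hazard rate below are illustrative assumptions, not the study's fitted parameters.

```python
# Sketch: belief updating over switching stimulus contexts and the resulting
# decision bias. Three contexts differ in their mean orientation; the context
# switches with a small hazard rate. All numbers are illustrative assumptions.
import numpy as np
from scipy.stats import norm

means = np.array([-20.0, 0.0, 20.0])   # context-specific mean orientation (deg)
sigma_ctx = 10.0                        # spread of orientations within a context
sigma_sens = 8.0                        # sensory noise on each measurement
hazard = 0.1                            # probability of a context switch per trial

belief = np.ones(3) / 3                 # posterior over contexts
T = (1 - hazard) * np.eye(3) + hazard / 2 * (np.ones((3, 3)) - np.eye(3))

rng = np.random.default_rng(1)
true_ctx = 2
for t in range(20):
    theta = rng.normal(means[true_ctx], sigma_ctx)   # true orientation this trial
    m = rng.normal(theta, sigma_sens)                # noisy measurement
    belief = T.T @ belief                            # predict: context may switch
    belief *= norm.pdf(m, means, np.hypot(sigma_ctx, sigma_sens))
    belief /= belief.sum()
    # The context-induced prior mean biases the orientation estimate toward it;
    # the pull grows as the belief concentrates on the current context.
    prior_mean = belief @ means
    w = sigma_sens**2 / (sigma_sens**2 + sigma_ctx**2)
    estimate = (1 - w) * m + w * prior_mean
    print(f"trial {t:2d}: m = {m:6.1f}, bias toward context = {estimate - m:5.1f}")
```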
Collapse
Affiliation(s)
- Julie A Charlton
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
| | | | - Yoon H Bai
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
| | - Ann M Hermundstad
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America
| | - Robbe L T Goris
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
| |
Collapse
|
29
|
Muller KS, Matthis J, Bonnen K, Cormack LK, Huk AC, Hayhoe M. Retinal motion statistics during natural locomotion. eLife 2023; 12:e82410. [PMID: 37133442 PMCID: PMC10156169 DOI: 10.7554/elife.82410] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2022] [Accepted: 04/09/2023] [Indexed: 05/04/2023] Open
Abstract
Walking through an environment generates retinal motion, which humans rely on to perform a variety of visual tasks. Retinal motion patterns are determined by an interconnected set of factors, including gaze location, gaze stabilization, the structure of the environment, and the walker's goals. The characteristics of these motion signals have important consequences for neural organization and behavior. However, to date, there are no empirical in situ measurements of how combined eye and body movements interact with real 3D environments to shape the statistics of retinal motion signals. Here, we collect measurements of the eyes, the body, and the 3D environment during locomotion. We describe properties of the resulting retinal motion patterns. We explain how these patterns are shaped by gaze location in the world, as well as by behavior, and how they may provide a template for the way motion sensitivity and receptive field properties vary across the visual field.
Collapse
Affiliation(s)
- Karl S Muller
- Center for Perceptual Systems, The University of Texas at Austin, Austin, United States
| | - Jonathan Matthis
- Department of Biology, Northeastern University, Boston, United States
| | - Kathryn Bonnen
- School of Optometry, Indiana University, Bloomington, United States
| | - Lawrence K Cormack
- Center for Perceptual Systems, The University of Texas at Austin, Austin, United States
| | - Alex C Huk
- Center for Perceptual Systems, The University of Texas at Austin, Austin, United States
| | - Mary Hayhoe
- Center for Perceptual Systems, The University of Texas at Austin, Austin, United States
| |
Collapse
|
30
|
Menceloglu M, Song JH. Motion duration is overestimated behind an occluder in action and perception tasks. J Vis 2023; 23:11. [PMID: 37171804 PMCID: PMC10184779 DOI: 10.1167/jov.23.5.11] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/13/2023] Open
Abstract
Motion estimation behind an occluder is a common task in situations like crossing the street or passing another car. People tend to overestimate the duration of an object's motion when it gets occluded for subsecond motion durations. Here, we explored (a) whether this bias depended on the type of interceptive action: discrete keypress versus continuous reach and (b) whether it was present in a perception task without an interceptive action. We used a prediction-motion task and presented a bar moving across the screen with a constant velocity that later became occluded. In the action task, participants stopped the occluded bar when they thought the bar reached the goal position via keypress or reach. They were more likely to stop the bar after it passed the goal position regardless of the action type, suggesting that the duration of occluded motion was overestimated (or its speed was underestimated). In the perception task, where participants judged whether a tone was presented before or after the bar reached the goal position, a similar bias was observed. In both tasks, the bias was near constant across motion durations and directions and grew over trials. We speculate that this robust bias may be due to a temporal illusion, Bayesian slow-motion prior, or the processing of the visible-occluded boundary crossing. Understanding its exact mechanism, the conditions on which it depends, and the relative roles of speed and time perception requires further research.
Collapse
Affiliation(s)
- Melisa Menceloglu
- Department of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, RI, USA
| | - Joo-Hyun Song
- Department of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, RI, USA
- Carney Institute for Brain Science, Brown University, Providence, RI, USA
| |
Collapse
|
31
|
Khayat N, Ahissar M, Hochstein S. Perceptual history biases in serial ensemble representation. J Vis 2023; 23:7. [PMID: 36920389 PMCID: PMC10029768 DOI: 10.1167/jov.23.3.7] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Accepted: 02/03/2023] [Indexed: 03/16/2023] Open
Abstract
Ensemble perception refers to the visual system's ability to efficiently represent groups of similar objects as a unified percept using their summary statistical information. Most studies focused on extraction of current trial averages, giving little attention to prior experience effects, although a few recent studies found that ensemble mean estimations contract toward previously presented stimuli, with most of these focusing on explicit perceptual averaging of simultaneously presented item ensembles. Yet, the time element is crucial in real dynamic environments, where we encounter ensemble items over time, aggregating information until reaching summary representations. Moreover, statistical information of objects and scenes is learned over time and often implicitly and then used for predictions that shape perception, promoting environmental stability. Therefore, we now focus on temporal aspects of ensemble statistics and test whether prior information, beyond the current trial, biases implicit perceptual decisions. We designed methods to separate current trial biases from those of previously seen trial ensembles. In each trial, six circles of different sizes were presented serially, followed by two test items. Participants were asked to choose which was present in the sequence. Participants unconsciously rely on ensemble statistics, choosing stimuli closer to the ensemble mean. To isolate the influence of earlier trials, the two test items were sometimes equidistant from the current trial mean. Results showed membership judgment biases toward current trial mean, when informative (largest effect). On equidistant trials, judgments were biased toward previously experienced stimulus statistics. Comparison of similar conditions with a shifted stimulus distribution ruled out a bias toward an earlier, presession, prototypical diameter. We conclude that ensemble perception, even for temporally experienced ensembles, is influenced not only by current trial mean but also by means of recently seen ensembles and that these influences are somewhat correlated on a participant-by-participant basis.
Collapse
Affiliation(s)
- Noam Khayat
- ELSC Edmond & Lily Safra Center for Brain Research & Life Sciences Institute, Hebrew University, Jerusalem, Israel
| | - Merav Ahissar
- ELSC Edmond & Lily Safra Center for Brain Research & Psychology Department, Hebrew University, Jerusalem, Israel
| | - Shaul Hochstein
- ELSC Edmond & Lily Safra Center for Brain Research & Life Sciences Institute, Hebrew University, Jerusalem, Israel
| |
Collapse
|
32
|
Korai Y, Miura K. A dynamical model of visual motion processing for arbitrary stimuli including type II plaids. Neural Netw 2023; 162:46-68. [PMID: 36878170 DOI: 10.1016/j.neunet.2023.02.039] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2022] [Revised: 02/23/2023] [Accepted: 02/25/2023] [Indexed: 03/04/2023]
Abstract
To explore the operating principle of visual motion processing in the brain underlying perception and eye movements, we model the information processing underlying the velocity estimate of a visual stimulus at the algorithmic level using a dynamical systems approach. In this study, we formulate the model as an optimization process over an appropriately defined objective function. The model is applicable to arbitrary visual stimuli. We find that our theoretical predictions qualitatively agree with the time evolution of eye movements reported in previous work across various types of stimulus. Our results suggest that the brain implements the present framework as the internal model of motion vision. We anticipate our model to be a promising building block for a more profound understanding of visual motion processing as well as for the development of robotics.
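The general idea of treating velocity estimation as dynamics on an objective function can be sketched with a toy plaid: each grating constrains the velocity component along its normal, a slowness term regularises the estimate, and gradient descent plays the role of the model dynamics. The constraint directions, weights, and step size are illustrative assumptions, not the paper's equations.

```python
# Sketch: velocity estimation as gradient dynamics on an objective function,
# illustrated for a plaid made of two gratings. Early iterates resemble a
# vector-sum direction; the dynamics converge toward the (regularised)
# intersection-of-constraints solution. This is a toy model, not the paper's.
import numpy as np

# Two grating constraints: unit normals and normal speeds (type II-like plaid).
normals = np.array([[np.cos(a), np.sin(a)] for a in np.deg2rad([10.0, 40.0])])
speeds = np.array([1.0, 1.5])
lam = 0.05                                  # weight of the slowness prior

def grad(v):
    """Gradient of E(v) = sum_i (n_i . v - s_i)^2 + lam * |v|^2."""
    residuals = normals @ v - speeds
    return 2 * normals.T @ residuals + 2 * lam * v

v = np.zeros(2)                             # start with no motion estimate
for step in range(200):
    v -= 0.05 * grad(v)                     # gradient descent = model dynamics
    if step in (1, 5, 20, 199):
        direction = np.degrees(np.arctan2(v[1], v[0]))
        print(f"step {step:3d}: speed = {np.linalg.norm(v):.2f}, "
              f"direction = {direction:5.1f} deg")
```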
Collapse
Affiliation(s)
- Yusuke Korai
- Integrated Clinical Education Center, Kyoto University Hospital, Kyoto University, Kyoto 606-8507, Japan.
| | - Kenichiro Miura
- Graduate School of Medicine, Kyoto University, Kyoto 606-8501, Japan; Department of Pathology of Mental Diseases, National Institute of Mental Health, National Center of Neurology and Psychiatry, Tokyo 187-8551, Japan.
| |
Collapse
|
33
|
Recurrent networks endowed with structural priors explain suboptimal animal behavior. Curr Biol 2023; 33:622-638.e7. [PMID: 36657448 DOI: 10.1016/j.cub.2022.12.044] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Revised: 10/03/2022] [Accepted: 12/16/2022] [Indexed: 01/19/2023]
Abstract
The strategies found by animals facing a new task are determined both by individual experience and by structural priors evolved to leverage the statistics of natural environments. Rats quickly learn to capitalize on the trial sequence correlations of two-alternative forced choice (2AFC) tasks after correct trials but consistently deviate from optimal behavior after error trials. To understand this outcome-dependent gating, we first show that recurrent neural networks (RNNs) trained in the same 2AFC task outperform rats as they can readily learn to use across-trial information both after correct and error trials. We hypothesize that, although RNNs can optimize their behavior in the 2AFC task without any a priori restrictions, rats' strategy is constrained by a structural prior adapted to a natural environment in which rewarded and non-rewarded actions provide largely asymmetric information. When pre-training RNNs in a more ecological task with more than two possible choices, networks develop a strategy by which they gate off the across-trial evidence after errors, mimicking rats' behavior. Population analyses show that the pre-trained networks form an accurate representation of the sequence statistics independently of the outcome in the previous trial. After error trials, gating is implemented by a change in the network dynamics that temporarily decouple the categorization of the stimulus from the across-trial accumulated evidence. Our results suggest that the rats' suboptimal behavior reflects the influence of a structural prior that reacts to errors by isolating the network decision dynamics from the context, ultimately constraining the performance in a 2AFC laboratory task.
Collapse
|
34
|
Priming of probabilistic attentional templates. Psychon Bull Rev 2023; 30:22-39. [PMID: 35831678 DOI: 10.3758/s13423-022-02125-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/09/2022] [Indexed: 11/08/2022]
Abstract
Attentional priming has a dominating influence on vision, speeding visual search, releasing items from crowding, reducing masking effects, and during free-choice, primed targets are chosen over unprimed ones. Many accounts postulate that templates stored in working memory control what we attend to and mediate the priming. But what is the nature of these templates (or representations)? Analyses of real-world visual scenes suggest that tuning templates to exact color or luminance values would be impractical since those can vary greatly because of changes in environmental circumstances and perceptual interpretation. Tuning templates to a range of the most probable values would be more efficient. Recent evidence does indeed suggest that the visual system represents such probability, gradually encoding statistical variation in the environment through repeated exposure to input statistics. This is consistent with evidence from neurophysiology and theoretical neuroscience as well as computational evidence of probabilistic representations in visual perception. I argue that such probabilistic representations are the unit of attentional priming and that priming of, say, a repeated single-color value simply involves priming of a distribution with no variance. This "priming of probability" view can be modelled within a Bayesian framework where priming provides contextual priors. Priming can therefore be thought of as learning of the underlying probability density function of the target or distractor sets in a given continuous task.
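The "priming of probability" idea can be sketched as learning a running Gaussian estimate of the target feature distribution across trials and using it as the template, so that the priming benefit for any candidate feature follows its probability under the learned distribution. The update rule, learning rate, and colour values below are illustrative assumptions, not a fitted model from the paper.

```python
# Sketch: "priming of probability" as learning the target feature distribution.
# The template is a running Gaussian estimate of the target colour distribution,
# updated after every trial and used as a prior on the next search display.
# Learning rate and all numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

mu, var = 0.0, 40.0**2          # initial broad template over colour (hue, deg)
alpha = 0.2                     # update rate of the running estimates

# Block 1: targets drawn from a narrow distribution; Block 2: a wide one.
blocks = [rng.normal(100, 5, 30), rng.normal(100, 25, 30)]
for b, targets in enumerate(blocks, start=1):
    for hue in targets:
        mu = (1 - alpha) * mu + alpha * hue
        var = (1 - alpha) * var + alpha * (hue - mu) ** 2
    # A candidate item is primed in proportion to its probability under the
    # learned template, so the benefit spreads over the learned range.
    candidate = 110.0
    priming = np.exp(-(candidate - mu) ** 2 / (2 * var))
    print(f"block {b}: template mean = {mu:5.1f}, sd = {np.sqrt(var):4.1f}, "
          f"relative priming of {candidate:.0f} deg = {priming:.2f}")
```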
Collapse
|
35
|
Variability of dot spread is overestimated. Atten Percept Psychophys 2023; 85:494-504. [PMID: 35708846 DOI: 10.3758/s13414-022-02528-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/08/2022] [Indexed: 11/08/2022]
Abstract
Previous research has demonstrated that individuals exhibit a tendency to overestimate the variability of both low-level features (e.g., color, orientation) and mid-level features (e.g., size) when items are presented dynamically in a sequential order, a finding we will refer to as the variability overestimation effect. Because previous research on this bias used sequential displays, an open question is whether the effect is due to a memory-related bias or a vision-related bias. To assess whether the bias would also be apparent with static, simultaneous displays, and to examine whether the bias generalizes to spatial properties, we tested participants' perception of the variability of a cluster of dots. Results showed a consistent overestimation bias: Participants judged the dots as being more spread than they actually were. The variability overestimation effect was observed when there were 10 or 20 dots but not when there were 50 dots. Taken together, the results of the current study contribute to the ensemble perception literature by providing evidence that simultaneously presented stimuli are also susceptible to the variability overestimation effect. The use of static displays further demonstrates that this bias is present in both dynamic and static contexts, suggesting an inherent bias existent in the human visual system. A potential theoretical account-boundary effect-is discussed as a potential underlying mechanism. Moreover, the present study has implications for common visual tasks carried out in real-world scenarios, such as a radiologist making judgments about distribution of calcification in breast cancer diagnoses.
Collapse
|
36
|
Jeong W, Kim S, Park J, Lee J. Multivariate EEG activity reflects the Bayesian integration and the integrated Galilean relative velocity of sensory motion during sensorimotor behavior. Commun Biol 2023; 6:113. [PMID: 36709242 PMCID: PMC9884247 DOI: 10.1038/s42003-023-04481-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Accepted: 01/12/2023] [Indexed: 01/29/2023] Open
Abstract
Humans integrate multiple sources of information for action-taking, using the reliability of each source to allocate weight to the data. This reliability-weighted information integration is a crucial property of Bayesian inference. In this study, participants were asked to perform a smooth pursuit eye movement task in which we independently manipulated the reliability of pursuit target motion and the direction-of-motion cue. Through an analysis of pursuit initiation and multivariate electroencephalography activity, we found neural and behavioral evidence of Bayesian information integration: more attraction toward the cue direction was generated when the target motion was weak and unreliable. Furthermore, using mathematical modeling, we found that the neural signature of Bayesian information integration had extra-retinal origins, although most of the multivariate electroencephalography activity patterns during pursuit were best correlated with the retinal velocity errors accumulated over time. Our results demonstrated neural implementation of Bayesian inference in human oculomotor behavior.
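The reliability-weighted integration at the heart of this account is the standard inverse-variance combination of two Gaussian sources: when the target motion signal is unreliable, the direction cue attracts the estimate more strongly. The sketch below uses that textbook formula with illustrative numbers; it is not the study's specific model.

```python
# Sketch: reliability-weighted (inverse-variance) integration of a motion
# estimate with a direction cue. When target motion is unreliable, the cue
# direction attracts the integrated estimate more. Values are illustrative.
import numpy as np

def integrate(motion_dir, sigma_motion, cue_dir, sigma_cue):
    """Posterior mean and sd for two independent Gaussian sources."""
    w_motion = sigma_cue**2 / (sigma_motion**2 + sigma_cue**2)
    mean = w_motion * motion_dir + (1 - w_motion) * cue_dir
    sd = np.sqrt((sigma_motion**2 * sigma_cue**2) / (sigma_motion**2 + sigma_cue**2))
    return mean, sd

cue_dir = 30.0                                  # direction-of-motion cue (deg)
for sigma_motion in (2.0, 8.0):                 # reliable vs unreliable target motion
    mean, sd = integrate(motion_dir=0.0, sigma_motion=sigma_motion,
                         cue_dir=cue_dir, sigma_cue=5.0)
    print(f"sigma_motion = {sigma_motion}: integrated direction = {mean:5.1f} deg "
          f"(attraction toward cue), sd = {sd:.1f}")
```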
Collapse
Affiliation(s)
- Woojae Jeong
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea; Department of Biomedical Engineering, University of Southern California, Los Angeles, CA 90089, USA
| | - Seolmin Kim
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea; Department of Biomedical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
| | - JeongJun Park
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea; Division of Biology and Biomedical Sciences, Program in Neurosciences, Washington University in St. Louis, St. Louis, MO 63130, USA
| | - Joonyeol Lee
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea; Department of Biomedical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea; Department of Intelligent Precision Healthcare Convergence, Sungkyunkwan University, Suwon 16419, Republic of Korea

| |
Collapse
|
37
|
Yang L, Lei W, Zhang W, Ye T. Dual-flow network with attention for autonomous driving. Front Neurorobot 2023; 16:978225. [PMID: 36699946 PMCID: PMC9868693 DOI: 10.3389/fnbot.2022.978225] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2022] [Accepted: 12/08/2022] [Indexed: 01/11/2023] Open
Abstract
We present a dual-flow network for autonomous driving using an attention mechanism. The model works as follows: (i) the perception network takes red, green, and blue (RGB) images extracted from the video at a low rate as input and performs feature extraction on the images; (ii) the motion network takes grayscale images extracted from the video at a high rate as input and extracts object motion features; (iii) the perception and motion networks are fused using an attention mechanism at each feature layer to perform waypoint prediction. The model was trained and tested using the CARLA simulator and enabled autonomous driving in complex urban environments, achieving a success rate of 74%, especially in the case of multiple dynamic objects.
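A minimal PyTorch sketch of the overall two-stream idea is given below: a perception stream encodes a low-rate RGB frame, a motion stream encodes a stack of high-rate grayscale frames, and an attention map derived from the motion stream gates the perception features before waypoint regression. The layer sizes, fusion rule, and class name are illustrative assumptions, not the paper's architecture.

```python
# Sketch (PyTorch): a minimal two-stream network in the spirit of the model.
# Layer sizes and the fusion rule are illustrative assumptions.
import torch
import torch.nn as nn

class DualFlowNet(nn.Module):
    def __init__(self, n_gray_frames=4, n_waypoints=4):
        super().__init__()
        self.n_waypoints = n_waypoints
        self.perception = nn.Sequential(           # RGB appearance features
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU())
        self.motion = nn.Sequential(                # grayscale motion features
            nn.Conv2d(n_gray_frames, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU())
        self.attention = nn.Sequential(nn.Conv2d(32, 1, 1), nn.Sigmoid())
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(32, 2 * n_waypoints))

    def forward(self, rgb, gray_stack):
        p = self.perception(rgb)
        m = self.motion(gray_stack)
        fused = p * self.attention(m) + m           # attention-gated fusion
        return self.head(fused).view(-1, self.n_waypoints, 2)

net = DualFlowNet()
waypoints = net(torch.randn(1, 3, 128, 128), torch.randn(1, 4, 128, 128))
print(waypoints.shape)                              # torch.Size([1, 4, 2])
```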
Collapse
Affiliation(s)
- Lei Yang
- School of Computer Science and Engineering, Northeastern University, Shenyang, China; DAMO Academy, Alibaba Group, Hangzhou, China
| | - Weimin Lei
- School of Computer Science and Engineering, Northeastern University, Shenyang, China; Engineering Research Center of Security Technology of Complex Network System, Ministry of Education, Shenyang, China
| | - Wei Zhang
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
| | - Tianbing Ye
- DAMO Academy, Alibaba Group, Hangzhou, China
| |
Collapse
|
38
|
Do Q, Li Y, Kane GA, McGuire JT, Scott BB. Assessing evidence accumulation and rule learning in humans with an online game. J Neurophysiol 2023; 129:131-143. [PMID: 36475830 DOI: 10.1152/jn.00124.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022] Open
Abstract
Evidence accumulation, an essential component of perception and decision making, is frequently studied with psychophysical tasks involving noisy or ambiguous stimuli. In these tasks, participants typically receive verbal or written instructions that describe the strategy that should be used to guide decisions. Although convenient and effective, explicit instructions can influence learning and decision making strategies and can limit comparisons with animal models, in which behaviors are reinforced through feedback. Here, we developed an online video game and nonverbal training pipeline, inspired by pulse-based tasks for rodents, as an alternative to traditional psychophysical tasks used to study evidence accumulation. Using this game, we collected behavioral data from hundreds of participants trained with an explicit description of the decision rule or with experiential feedback. Participants trained with feedback alone learned the game rules rapidly and used strategies and displayed biases similar to those who received explicit instructions. Finally, by leveraging data across hundreds of participants, we show that perceptual judgments were well described by an accumulation process in which noise scaled nonlinearly with evidence, consistent with previous animal studies but inconsistent with diffusion models widely used to describe perceptual decisions in humans. These results challenge the conventional description of the accumulation process and suggest that online games provide a valuable platform to examine perceptual decision making and learning in humans. In addition, the feedback-based training pipeline developed for this game may be useful for evaluating perceptual decision making in human populations with difficulty following verbal instructions. NEW & NOTEWORTHY: Perceptual uncertainty sets critical constraints on our ability to accumulate evidence and make decisions; however, its sources remain unclear. We developed a video game, and feedback-based training pipeline, to study uncertainty during decision making. Leveraging choices from hundreds of subjects, we demonstrate that human choices are inconsistent with popular diffusion models of human decision making and instead are best fit by models in which perceptual uncertainty scales nonlinearly with the strength of sensory evidence.
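The contrast between constant (diffusion-like) noise and noise that grows with the momentary evidence can be illustrated with a toy pulse-based accumulator. Pulse rates, the noise exponent, and every other number below are illustrative assumptions, not the fitted models from the study.

```python
# Sketch: pulse-based evidence accumulation with noise that scales nonlinearly
# with the momentary evidence, contrasted with constant (diffusion-like) noise.
# All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

def simulate(n_trials=5000, n_pulses=20, rate_l=20, rate_r=30,
             noise_scaling=0.0, base_noise=1.0):
    """Accuracy of an accumulator; noise sd grows as |evidence| ** noise_scaling."""
    correct = 0
    for _ in range(n_trials):
        left = rng.poisson(rate_l / n_pulses, n_pulses)
        right = rng.poisson(rate_r / n_pulses, n_pulses)
        evidence = (right - left).astype(float)
        noise_sd = base_noise * np.maximum(np.abs(evidence), 1.0) ** noise_scaling
        acc = np.sum(evidence + rng.normal(0, noise_sd))
        correct += acc > 0
    return correct / n_trials

print("constant noise          :", simulate(noise_scaling=0.0))
print("evidence-scaled noise   :", simulate(noise_scaling=1.0))
```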
Collapse
Affiliation(s)
- Quan Do
- Department of Psychological and Brain Sciences and Center for Systems Neuroscience, Boston University, Boston, Massachusetts
| | - Yutong Li
- Department of Psychological and Brain Sciences and Center for Systems Neuroscience, Boston University, Boston, Massachusetts
| | - Gary A Kane
- Department of Psychological and Brain Sciences and Center for Systems Neuroscience, Boston University, Boston, Massachusetts
| | - Joseph T McGuire
- Department of Psychological and Brain Sciences and Center for Systems Neuroscience, Boston University, Boston, Massachusetts
| | - Benjamin B Scott
- Department of Psychological and Brain Sciences and Center for Systems Neuroscience, Boston University, Boston, Massachusetts
| |
Collapse
|
39
|
Confidence reflects a noisy decision reliability estimate. Nat Hum Behav 2023; 7:142-154. [PMID: 36344656 DOI: 10.1038/s41562-022-01464-x] [Citation(s) in RCA: 16] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2022] [Accepted: 09/21/2022] [Indexed: 11/09/2022]
Abstract
Decisions vary in difficulty. Humans know this and typically report more confidence in easy than in difficult decisions. However, confidence reports do not perfectly track decision accuracy, but also reflect response biases and difficulty misjudgements. To isolate the quality of confidence reports, we developed a model of the decision-making process underlying choice-confidence data. In this model, confidence reflects a subject's estimate of the reliability of their decision. The quality of this estimate is limited by the subject's uncertainty about the uncertainty of the variable that informs their decision ('meta-uncertainty'). This model provides an accurate account of choice-confidence data across a broad range of perceptual and cognitive tasks, investigated in six previous studies. We find meta-uncertainty varies across subjects, is stable over time, generalizes across some domains and can be manipulated experimentally. The model offers a parsimonious explanation for the computational processes that underlie and constrain the sense of confidence.
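The core idea, that confidence is a reliability estimate computed from a noisy belief about one's own sensory noise, can be simulated in a few lines: as the scatter on the estimated noise ("meta-uncertainty") grows, confidence separates correct from error trials less well. The log-normal scatter and all numbers are illustrative assumptions, not the paper's fitted model.

```python
# Sketch: confidence as a noisy estimate of decision reliability.
# Confidence is the posterior probability of being correct computed from the
# decision variable and an *estimate* of the sensory noise; that estimate
# carries log-normal scatter ("meta-uncertainty"). Numbers are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def confidence_gap(meta_uncertainty, stim=0.8, sigma=1.0, n=50000):
    """Mean confidence on correct minus error trials (metacognitive sensitivity)."""
    x = rng.normal(stim, sigma, n)                 # internal decision variable
    correct = np.sign(x) == np.sign(stim)
    # Noisy belief about one's own sensory noise (the meta-uncertainty).
    sigma_hat = sigma * rng.lognormal(0.0, meta_uncertainty, n)
    # Confidence: P(choice correct | x, sigma_hat) under a flat stimulus prior.
    conf = norm.cdf(np.abs(x) / sigma_hat)
    return conf[correct].mean() - conf[~correct].mean()

for m in (0.0, 0.5, 1.0):
    print(f"meta-uncertainty = {m:.1f}: "
          f"confidence gap (correct - error) = {confidence_gap(m):.3f}")
```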
Collapse
|
40
|
Abekawa N, Doya K, Gomi H. Body and visual instabilities functionally modulate implicit reaching corrections. iScience 2022; 26:105751. [PMID: 36590158 PMCID: PMC9800534 DOI: 10.1016/j.isci.2022.105751] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2021] [Revised: 07/31/2022] [Accepted: 12/02/2022] [Indexed: 12/12/2022] Open
Abstract
Hierarchical schemes of brain information processing have frequently assumed that flexible but slow voluntary action modulates a direct sensorimotor process that can quickly generate reactions during dynamic interaction. Here we show that the quick visuomotor process for manual movement is modulated by postural and visual instability contexts that are related to, but remote from and prior to, the manual movement itself. A preceding unstable postural context significantly enhanced the reflexive manual response induced by a large-field visual motion during hand reaching, while the response was evidently weakened by imposing a preceding random-visual-motion context. These modulations are successfully explained by a Bayesian optimal formulation in which the manual response elicited by visual motion is ascribed to a compensatory response to the estimated self-motion, which is affected by the preceding contextual situations. Our findings suggest an implicit and functional mechanism that links the variability and uncertainty of remote states to quick sensorimotor transformations.
Collapse
Affiliation(s)
- Naotoshi Abekawa
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Co., Kanagawa 243-0198, Japan
| | - Kenji Doya
- Okinawa Institute of Science and Technology Graduate University, Okinawa 904-0495, Japan
| | - Hiroaki Gomi
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Co., Kanagawa 243-0198, Japan
| |
Collapse
|
41
|
Alexander E, Cai LT, Fuchs S, Hladnik TC, Zhang Y, Subramanian V, Guilbeault NC, Vijayakumar C, Arunachalam M, Juntti SA, Thiele TR, Arrenberg AB, Cooper EA. Optic flow in the natural habitats of zebrafish supports spatial biases in visual self-motion estimation. Curr Biol 2022; 32:5008-5021.e8. [PMID: 36327979 PMCID: PMC9729457 DOI: 10.1016/j.cub.2022.10.009] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Revised: 08/15/2022] [Accepted: 10/05/2022] [Indexed: 12/12/2022]
Abstract
Animals benefit from knowing if and how they are moving. Across the animal kingdom, sensory information in the form of optic flow over the visual field is used to estimate self-motion. However, different species exhibit strong spatial biases in how they use optic flow. Here, we show computationally that noisy natural environments favor visual systems that extract spatially biased samples of optic flow when estimating self-motion. The performance associated with these biases, however, depends on interactions between the environment and the animal's brain and behavior. Using the larval zebrafish as a model, we recorded natural optic flow associated with swimming trajectories in the animal's habitat with an omnidirectional camera mounted on a mechanical arm. An analysis of these flow fields suggests that lateral regions of the lower visual field are most informative about swimming speed. This pattern is consistent with the recent findings that zebrafish optomotor responses are preferentially driven by optic flow in the lateral lower visual field, which we extend with behavioral results from a high-resolution spherical arena. Spatial biases in optic-flow sampling are likely pervasive because they are an effective strategy for determining self-motion in noisy natural environments.
Collapse
Affiliation(s)
- Emma Alexander
- Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley, Berkeley, CA 94720, USA; Present address: Department of Computer Science, Northwestern University, Evanston, IL 60208, USA
| | - Lanya T. Cai
- Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley, Berkeley, CA 94720, USA; Present address: Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA 94158, USA
| | - Sabrina Fuchs
- Werner Reichardt Centre for Integrative Neuroscience, Institute of Neurobiology, University of Tübingen, 72076 Tübingen, Germany
| | - Tim C. Hladnik
- Werner Reichardt Centre for Integrative Neuroscience, Institute of Neurobiology, University of Tübingen, 72076 Tübingen, Germany; Graduate Training Centre for Neuroscience, University of Tübingen, 72074 Tübingen, Germany
| | - Yue Zhang
- Werner Reichardt Centre for Integrative Neuroscience, Institute of Neurobiology, University of Tübingen, 72076 Tübingen, Germany; Graduate Training Centre for Neuroscience, University of Tübingen, 72074 Tübingen, Germany; Present address: Department of Cellular and Systems Neurobiology, Max Planck Institute for Biological Intelligence in Foundation, 82152 Martinsried, Germany
| | - Venkatesh Subramanian
- Department of Biological Sciences, University of Toronto Scarborough, Toronto M1C 1A4, Canada
| | - Nicholas C. Guilbeault
- Department of Biological Sciences, University of Toronto Scarborough, Toronto M1C 1A4, Canada; Department of Cell and Systems Biology, University of Toronto, Toronto M5S 3G5, Canada
| | - Chinnian Vijayakumar
- Department of Zoology, St. Andrew’s College, Gorakhpur, Uttar Pradesh 273001, India
| | - Muthukumarasamy Arunachalam
- Department of Zoology, School of Biological Sciences, Central University of Kerala, Kerala 671316, India; Present address: Centre for Inland Fishes and Conservation, St. Andrew’s College, Gorakhpur, Uttar Pradesh 273001, India
| | - Scott A. Juntti
- Department of Biology, University of Maryland, College Park, MD 20742, USA
| | - Tod R. Thiele
- Department of Biological Sciences, University of Toronto Scarborough, Toronto M1C 1A4, Canada; Department of Cell and Systems Biology, University of Toronto, Toronto M5S 3G5, Canada
| | - Aristides B. Arrenberg
- Werner Reichardt Centre for Integrative Neuroscience, Institute of Neurobiology, University of Tübingen, 72076 Tübingen, Germany
| | - Emily A. Cooper
- Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley, Berkeley, CA 94720, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA
| |
Collapse
|
42
|
Freeman TCA, Powell G. Perceived speed at low luminance: Lights out for the Bayesian observer? Vision Res 2022; 201:108124. [PMID: 36193604 DOI: 10.1016/j.visres.2022.108124] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2022] [Revised: 07/21/2022] [Accepted: 09/06/2022] [Indexed: 11/06/2022]
Abstract
To account for perceptual bias, Bayesian models use the precision of early sensory measurements to weight the influence of prior expectations. As precision decreases, prior expectations start to dominate. Important examples come from motion perception, where the slow-motion prior has been used to explain a variety of motion illusions in vision, hearing, and touch, many of which correlate appropriately with threshold measures of underlying precision. However, the Bayesian account seems defeated by the finding that moving objects appear faster in the dark, because most motion thresholds are worse at low luminance. Here we show this is not the case for speed discrimination. Our results show that performance improves at low light levels by virtue of a perceived contrast cue that is more salient in the dark. With this cue removed, discrimination becomes independent of luminance. However, we found perceived speed still increased in the dark for the same observers, and by the same amount. A possible interpretation is that motion processing is therefore not Bayesian, because our findings challenge a key assumption these models make, namely that the accuracy of early sensory measurements is independent of basic stimulus properties like luminance. However, a final experiment restored Bayesian behaviour by adding external noise, making discrimination worse and slowing perceived speed down. Our findings therefore suggest that motion is processed in a Bayesian fashion but based on noisy sensory measurements that also vary in accuracy.
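The Bayesian account under test here is the familiar slow-speed-prior estimator: perceived speed is the posterior mean of a noisy speed measurement combined with a zero-centred prior, so noisier measurements are pulled further toward zero. The sketch below uses that standard Gaussian formulation with illustrative numbers; the paper's point is that threshold-based noise estimates did not in fact worsen in the dark, which is why the authors appeal to reduced measurement accuracy instead.

```python
# Sketch: the standard Bayesian slow-speed-prior estimator. Perceived speed is
# the posterior mean given a noisy speed measurement and a zero-mean prior on
# speed; greater measurement noise pulls the estimate toward zero, the usual
# account of slower perceived speed in the dark. Numbers are illustrative.
import numpy as np

def perceived_speed(measured, sigma_meas, sigma_prior=4.0):
    """Posterior mean with likelihood N(measured, sigma_meas) and prior N(0, sigma_prior)."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_meas**2)
    return w * measured

true_speed = 6.0
for label, sigma in (("high luminance", 1.0), ("low luminance", 3.0)):
    print(f"{label:14s}: perceived speed = {perceived_speed(true_speed, sigma):.2f} deg/s")
```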
Collapse
Affiliation(s)
- Tom C A Freeman
- School of Psychology, Cardiff University, Tower Building, 70, Park Place, Cardiff CF10 3AT, United Kingdom.
| | - Georgie Powell
- School of Psychology, Cardiff University, Tower Building, 70, Park Place, Cardiff CF10 3AT, United Kingdom
| |
Collapse
|
43
|
Bill J, Gershman SJ, Drugowitsch J. Visual motion perception as online hierarchical inference. Nat Commun 2022; 13:7403. [PMID: 36456546 PMCID: PMC9715570 DOI: 10.1038/s41467-022-34805-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2021] [Accepted: 11/07/2022] [Indexed: 12/03/2022] Open
Abstract
Identifying the structure of motion relations in the environment is critical for navigation, tracking, prediction, and pursuit. Yet, little is known about the mental and neural computations that allow the visual system to infer this structure online from a volatile stream of visual information. We propose online hierarchical Bayesian inference as a principled solution for how the brain might solve this complex perceptual task. We derive an online Expectation-Maximization algorithm that explains human percepts qualitatively and quantitatively for a diverse set of stimuli, covering classical psychophysics experiments, ambiguous motion scenes, and illusory motion displays. We thereby identify normative explanations for the origin of human motion structure perception and make testable predictions for future psychophysics experiments. The proposed online hierarchical inference model furthermore affords a neural network implementation which shares properties with motion-sensitive cortical areas and motivates targeted experiments to reveal the neural representations of latent structure.
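A heavily reduced illustration of the underlying idea, not the authors' online Expectation-Maximization algorithm, is to treat observed dot velocities as a shared group component plus individual motion and track the shared component online with a Kalman-style update. The generative model, noise values, and the use of the true individual-motion variance below are simplifying assumptions for the sketch.

```python
# Sketch: a heavily simplified version of online hierarchical motion inference.
# Observed dot velocities = shared group velocity + individual motion + noise;
# the shared component is tracked online with a Kalman-style update.
# This is a toy illustration, not the paper's online EM algorithm.
import numpy as np

rng = np.random.default_rng(5)

n_dots, dt, steps = 5, 0.05, 100
group_v = 2.0                            # true shared (group) velocity
indiv_v = rng.normal(0, 1.0, n_dots)     # true individual velocities
indiv_v -= indiv_v.mean()                # centred so the shared part is identifiable
obs_noise, drift = 0.5, 0.2

g_hat, g_var = 0.0, 10.0                 # belief about the shared component
for t in range(steps):
    obs = group_v + indiv_v + rng.normal(0, obs_noise, n_dots)   # noisy velocities
    g_var += drift**2 * dt                                       # predict (diffusion)
    # Update with the average observation; individual motion acts as extra
    # observation noise (a full model would estimate this term too).
    obs_var = (obs_noise**2 + indiv_v.var()) / n_dots
    k = g_var / (g_var + obs_var)
    g_hat += k * (obs.mean() - g_hat)
    g_var *= (1 - k)
print(f"estimated shared velocity = {g_hat:.2f} (true {group_v})")
```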
Collapse
Grants
- U19 NS118246 NINDS NIH HHS
- U.S. Department of Health & Human Services | NIH | National Institute of Neurological Disorders and Stroke (NINDS)
- James S. McDonnell Foundation (McDonnell Foundation)
- This research was supported by grants from the NIH (NINDS U19NS118246, J.D.), the James S. McDonnell Foundation (Scholar Award for Understanding Human Cognition, Grant 220020462, J.D.), the Harvard Brain Science Initiative (Collaborative Seed Grant, J.D. & S.J.G.), and the Center for Brains, Minds, and Machines (CBMM; funded by NSF STC award CCF-1231216, S.J.G.).
Collapse
Affiliation(s)
- Johannes Bill
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA.
- Department of Psychology, Harvard University, Cambridge, MA, USA.
| | - Samuel J Gershman
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Center for Brains, Minds, and Machines, MIT, Cambridge, MA, USA
| | - Jan Drugowitsch
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
| |
Collapse
|
44
|
Bosten JM, Coen-Cagli R, Franklin A, Solomon SG, Webster MA. Calibrating Vision: Concepts and Questions. Vision Res 2022; 201:108131. [PMID: 37139435 PMCID: PMC10151026 DOI: 10.1016/j.visres.2022.108131] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The idea that visual coding and perception are shaped by experience and adjust to changes in the environment or the observer is universally recognized as a cornerstone of visual processing, yet the functions and processes mediating these calibrations remain in many ways poorly understood. In this article we review a number of facets and issues surrounding the general notion of calibration, with a focus on plasticity within the encoding and representational stages of visual processing. These include how many types of calibrations there are - and how we decide; how plasticity for encoding is intertwined with other principles of sensory coding; how it is instantiated at the level of the dynamic networks mediating vision; how it varies with development or between individuals; and the factors that may limit the form or degree of the adjustments. Our goal is to give a small glimpse of an enormous and fundamental dimension of vision, and to point to some of the unresolved questions in our understanding of how and why ongoing calibrations are a pervasive and essential element of vision.
Collapse
Affiliation(s)
| | - Ruben Coen-Cagli
- Department of Systems & Computational Biology, Dominick P. Purpura Department of Neuroscience, and Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY
| | | | - Samuel G Solomon
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, UK
| | | |
Collapse
|
45
|
Fedorenko E, Ryskin R, Gibson E. Agrammatic output in non-fluent, including Broca's, aphasia as a rational behavior. APHASIOLOGY 2022; 37:1981-2000. [PMID: 38213953 PMCID: PMC10782888 DOI: 10.1080/02687038.2022.2143233] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/11/2022] [Accepted: 10/31/2022] [Indexed: 01/13/2024]
Abstract
Background: Speech of individuals with non-fluent, including Broca's, aphasia is often characterized as "agrammatic" because their output mostly consists of nouns and, to a lesser extent, verbs and lacks function words, like articles and prepositions, and correct morphological endings. Among the earliest accounts of agrammatic output in the early 1900s was the "economy of effort" idea whereby agrammatic output is construed as a way of coping with increases in the cost of language production. This idea resurfaced in the 1980s, but in general, the field of language research has largely focused on accounts of agrammatism that postulated core deficits in syntactic knowledge. Aims: We here revisit the economy of effort hypothesis in light of increasing emphasis in cognitive science on rational and efficient behavior. Main contribution: The critical idea is as follows: there is a cost per unit of linguistic output, and this cost is greater for patients with non-fluent aphasia. For a rational agent, this increase leads to shorter messages. Critically, the informative parts of the message should be preserved and the redundant ones (like the function words and inflectional markers) should be omitted. Although economy of effort is unlikely to provide a unifying account of agrammatic output in all patients (the relevant population is too heterogeneous and the empirical landscape too complex for any single-factor explanation), we argue that the idea of agrammatic output as a rational behavior was dismissed prematurely and appears to provide a plausible explanation for a large subset of the reported cases of expressive aphasia. Conclusions: The rational account of expressive agrammatism should be evaluated more carefully and systematically. On the basic research side, pursuing this hypothesis may reveal how the human mind and brain optimize communicative efficiency in the presence of production difficulties. And on the applied side, this construal of expressive agrammatism emphasizes the strengths of some patients to flexibly adapt utterances in order to communicate in spite of grammatical difficulties; and focusing on these strengths may be more effective than trying to "fix" their grammar.
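The economy-of-effort idea has a simple rational-analysis flavour that can be illustrated with a toy word-selection rule: each word carries some information about the message, producing a word has a cost, and a rational speaker keeps a word only when its information outweighs the cost. The word list and informativeness values below are made up purely for illustration.

```python
# Sketch: agrammatic output as rational message compression. Each word carries
# some information about the message; producing a word has a cost that is
# higher for a speaker with non-fluent aphasia. A rational speaker keeps a word
# only if its information exceeds its cost. Informativeness values are made up.
words = [("the", 0.2), ("boy", 3.0), ("is", 0.3), ("kicking", 2.5),
         ("a", 0.2), ("red", 1.5), ("ball", 2.8)]

def produce(cost_per_word):
    """Keep the words whose informativeness outweighs the production cost."""
    return [w for w, info in words if info > cost_per_word]

print("low production cost :", " ".join(produce(0.1)))   # full sentence
print("high production cost:", " ".join(produce(1.0)))   # telegraphic output
```

With a high per-word cost the output reduces to the content words ("boy kicking red ball"), mirroring the telegraphic character of the speech the account is meant to explain.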
Collapse
Affiliation(s)
- Evelina Fedorenko
- Massachusetts Institute of Technology, Brain & Cognitive Sciences Department
- Massachusetts Institute of Technology, McGovern Institute for Brain Research
- Speech and Hearing in Bioscience and Technology program at Harvard University
| | - Rachel Ryskin
- University of California at Merced, Cognitive & Information Sciences Department
| | - Edward Gibson
- Massachusetts Institute of Technology, Brain & Cognitive Sciences Department
| |
Collapse
|
46
|
Incao S, Mazzola C, Sciutti A. The impact of early aging on visual perception of space and time. Front Hum Neurosci 2022; 16:988644. [DOI: 10.3389/fnhum.2022.988644] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Accepted: 10/24/2022] [Indexed: 11/17/2022] Open
Abstract
Visual perception of space and time has been shown to rely on context dependency, an inferential process by which the average magnitude of a series of previously experienced stimuli acts as a prior during perception. This article aims to investigate the presence and evolution of this phenomenon in early aging. Two groups of participants belonging to two different age ranges (Young Adults: average age 28.8 years; Older Adults: average age 62.8 years) participated in the study, performing a discrimination task and a reproduction task in both spatial and temporal conditions. In particular, they were asked to evaluate lengths in the spatial domain and interval durations in the temporal one. Early aging was associated with a general decline in perceptual acuity, which was particularly evident in the temporal condition. The context dependency phenomenon was also preserved during aging, at levels similar to those exhibited by the younger group in both space and time perception. However, the older group showed greater variability in context dependency across participants, perhaps due to different strategies used to cope with higher uncertainty in the perceptual process.
Collapse
|
47
|
Koerfer K, Lappe M. Perceived movement of nonrigid motion patterns. PNAS NEXUS 2022; 1:pgac088. [PMID: 36741440 PMCID: PMC9896959 DOI: 10.1093/pnasnexus/pgac088] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Accepted: 06/16/2022] [Indexed: 02/07/2023]
Abstract
Nonrigid materials such as liquids or smoke deform over time. Little is known about the visual perception of nonrigid motion other than that many motion cues associated with rigid motion perception are not reliable for nonrigid motion. Nonrigid motion patterns lack clear borders, and their movement can be inconsistent with the motion of their parts. We developed a novel stimulus that creates a nonrigid vortex motion pattern in a random dot distribution and decouples the movement of the vortex from the first-order motion of the dots. We presented three moving vortices that entailed consecutively fewer motion cues, eliminating occlusion, motion borders, and velocity field gradients in the process. Subjects were well able to report the end position and travel path in all cases, showing that nonrigid motion is perceived through an analysis of the temporal evolution of visual motion patterns and does not require borders or speed differences. Adding a coherent global motion did not hamper perception, but adding local noise did, indicating that the visual system uses mid-level features that operate on a local scale. We also found that participants judged the movement of the nonrigid motion patterns to be slower than that of a rigid control, revealing that speed perception was based on a combination of the motion of the parts and the movement of the pattern. We propose that the visual system uses the temporal evolution of a motion pattern for the perception of nonrigid motion and suggest a plausible mechanism based on the curl of the motion field.
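The curl-based mechanism suggested at the end of the abstract can be sketched numerically (our illustration, not the authors' implementation; the vortex parameters are invented): in a velocity field containing a smooth vortex, the discrete curl peaks at the vortex centre, so tracking that peak across frames would recover the movement of the pattern independently of the first-order motion of the individual dots.

```python
import numpy as np

# Sketch of a curl-based vortex localiser (illustrative only).
# Build a 2-D velocity field with a single smooth vortex, compute its curl,
# and locate the vortex as the peak of the curl magnitude.

def vortex_field(x, y, centre, strength=1.0, core=0.1):
    dx, dy = x - centre[0], y - centre[1]
    r2 = dx**2 + dy**2 + core**2          # smooth (Rankine-like) vortex core
    u = -strength * dy / r2               # x-component of velocity
    v = strength * dx / r2                # y-component of velocity
    return u, v

n = 101
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
u, v = vortex_field(x, y, centre=(0.3, -0.2))

# Discrete curl: dv/dx - du/dy. np.gradient returns derivatives along
# axis 0 (rows, i.e. y) first, then axis 1 (columns, i.e. x).
dv_dy, dv_dx = np.gradient(v, y[:, 0], x[0, :])
du_dy, du_dx = np.gradient(u, y[:, 0], x[0, :])
curl = dv_dx - du_dy

iy, ix = np.unravel_index(np.argmax(np.abs(curl)), curl.shape)
print("estimated vortex centre:", (x[iy, ix], y[iy, ix]))   # ~ (0.3, -0.2)
```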
Collapse
Affiliation(s)
- Krischan Koerfer
- Institute for Psychology and Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster, Fliednerstr. 21, 48149 Münster, Germany
| | - Markus Lappe
- Institute for Psychology and Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster, Fliednerstr. 21, 48149 Münster, Germany
| |
Collapse
|
48
|
Harris DJ, Arthur T, Broadbent DP, Wilson MR, Vine SJ, Runswick OR. An Active Inference Account of Skilled Anticipation in Sport: Using Computational Models to Formalise Theory and Generate New Hypotheses. Sports Med 2022; 52:2023-2038. [PMID: 35503403 PMCID: PMC9388417 DOI: 10.1007/s40279-022-01689-w] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/06/2022] [Indexed: 11/30/2022]
Abstract
Optimal performance in time-constrained and dynamically changing environments depends on making reliable predictions about future outcomes. In sporting tasks, performers have been found to employ multiple information sources to maximise the accuracy of their predictions, but questions remain about how different information sources are weighted and integrated to guide anticipation. In this paper, we outline how predictive processing approaches, and active inference in particular, provide a unifying account of perception and action that explains many of the prominent findings in the sports anticipation literature. Active inference proposes that perception and action are underpinned by the organism’s need to remain within certain stable states. To this end, decision making approximates Bayesian inference and actions are used to minimise future prediction errors during brain–body–environment interactions. Using a series of Bayesian neurocomputational models based on a partially observable Markov process, we demonstrate that key findings from the literature can be recreated from the first principles of active inference. In doing so, we formulate a number of novel and empirically falsifiable hypotheses about human anticipation capabilities that could guide future investigations in the field.
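A heavily simplified sketch of the kind of belief updating involved (our illustration; the paper's active-inference models based on partially observable Markov processes are considerably richer, and the actions, priors, and likelihoods below are invented): an observer maintains a probability distribution over an opponent's intended action and updates it with Bayes' rule as successive, partially reliable cues arrive.

```python
import numpy as np

# Minimal sketch of Bayesian belief updating over an opponent's action
# (illustrative only; all probabilities below are invented).

actions = ["serve_wide", "serve_body", "serve_tee"]
belief = np.array([0.5, 0.3, 0.2])        # contextual prior over actions

# Likelihood of each observed cue given each action: P(cue | action).
likelihoods = {
    "early_kinematics": np.array([0.6, 0.3, 0.1]),
    "late_kinematics":  np.array([0.8, 0.15, 0.05]),
}

def update(belief, likelihood):
    posterior = belief * likelihood
    return posterior / posterior.sum()

for cue in ["early_kinematics", "late_kinematics"]:
    belief = update(belief, likelihoods[cue])
    print(cue, dict(zip(actions, belief.round(3))))
# The belief sharpens toward 'serve_wide' as increasingly reliable cues
# accumulate, illustrating how contextual priors and kinematic information
# can be weighted and integrated to guide anticipation.
```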
Collapse
Affiliation(s)
- David J Harris
- School of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK.
| | - Tom Arthur
- School of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK
| | - David P Broadbent
- Division of Sport, Health and Exercise Sciences, Department of Life Sciences, Brunel University London, London, UK
| | - Mark R Wilson
- School of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK
| | - Samuel J Vine
- School of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, St Luke's Campus, Exeter, EX1 2LU, UK
| | - Oliver R Runswick
- Department of Psychology, Institute of Psychiatry, Psychology, and Neuroscience, King's College London, London, UK
| |
Collapse
|
49
|
Maier M, Blume F, Bideau P, Hellwich O, Abdel Rahman R. Knowledge-augmented face perception: Prospects for the Bayesian brain-framework to align AI and human vision. Conscious Cogn 2022; 101:103301. [DOI: 10.1016/j.concog.2022.103301] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Revised: 11/27/2021] [Accepted: 01/04/2022] [Indexed: 11/03/2022]
|
50
|
Zhang LQ, Stocker AA. Prior Expectations in Visual Speed Perception Predict Encoding Characteristics of Neurons in Area MT. J Neurosci 2022; 42:2951-2962. [PMID: 35169018 PMCID: PMC8985856 DOI: 10.1523/jneurosci.1920-21.2022] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Revised: 01/18/2022] [Accepted: 01/19/2022] [Indexed: 11/21/2022] Open
Abstract
Bayesian inference provides an elegant theoretical framework for understanding the characteristic biases and discrimination thresholds in visual speed perception. However, the framework is difficult to validate because of its flexibility and the fact that suitable constraints on the structure of the sensory uncertainty have been missing. Here, we demonstrate that a Bayesian observer model constrained by efficient coding not only well explains human visual speed perception but also provides an accurate quantitative account of the tuning characteristics of neurons known for representing visual speed. Specifically, we found that the population coding accuracy for visual speed in area MT ("neural prior") is precisely predicted by the power-law, slow-speed prior extracted from fitting the Bayesian observer model to psychophysical data ("behavioral prior") to the point that the two priors are indistinguishable in a cross-validation model comparison. Our results demonstrate a quantitative validation of the Bayesian observer model constrained by efficient coding at both the behavioral and neural levels.
SIGNIFICANCE STATEMENT: Statistical regularities of the environment play an important role in shaping both neural representations and perceptual behavior. Most previous work addressed these two aspects independently. Here we present a quantitative validation of a theoretical framework that makes joint predictions for neural coding and behavior, based on the assumption that neural representations of sensory information are efficient but also optimally used in generating a percept. Specifically, we demonstrate that the neural tuning characteristics for visual speed in brain area MT are precisely predicted by the statistical prior expectations extracted from psychophysical data. As such, our results provide a normative link between perceptual behavior and the neural representation of sensory information in the brain.
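The behavioral half of this account can be illustrated with a short numerical sketch (our illustration; the power-law exponent and noise levels are invented, and the paper's observer is additionally constrained by efficient coding): with a power-law slow-speed prior, the posterior-mean speed estimate is biased toward slower speeds, and the bias grows as sensory noise increases, e.g. at low stimulus contrast.

```python
import numpy as np

# Numerical sketch of a Bayesian observer with a power-law slow-speed prior
# (illustrative only; exponent and noise levels are made up).

v = np.linspace(0.1, 20.0, 4000)      # candidate speeds (deg/s), uniform grid
a = 2.5                                # hypothetical power-law exponent
prior = v**(-a)
prior /= prior.sum()

def perceived_speed(true_speed, sigma):
    """Posterior-mean speed given Gaussian measurement noise in log-speed."""
    likelihood = np.exp(-(np.log(v) - np.log(true_speed))**2 / (2 * sigma**2))
    posterior = likelihood * prior
    posterior /= posterior.sum()
    return (v * posterior).sum()

print(perceived_speed(8.0, sigma=0.1))   # low noise (high contrast): close to 8
print(perceived_speed(8.0, sigma=0.4))   # high noise (low contrast): noticeably slower
```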
Collapse
|