1
Cusimano M, Hewitt LB, McDermott JH. Listening with generative models. Cognition 2024; 253:105874. [PMID: 39216190] [DOI: 10.1016/j.cognition.2024.105874]
Abstract
Perception has long been envisioned to use an internal model of the world to explain the causes of sensory signals. However, such accounts have historically not been testable, typically requiring intractable search through the space of possible explanations. Using auditory scenes as a case study, we leveraged contemporary computational tools to infer explanations of sounds in a candidate internal generative model of the auditory world (ecologically inspired audio synthesizers). Model inferences accounted for many classic illusions. Unlike traditional accounts of auditory illusions, the model is applicable to any sound, and exhibited human-like perceptual organization for real-world sound mixtures. The combination of stimulus-computability and interpretable model structure enabled 'rich falsification', revealing additional assumptions about sound generation needed to account for perception. The results show how generative models can account for the perception of both classic illusions and everyday sensory signals, and illustrate the opportunities and challenges involved in incorporating them into theories of perception.
Affiliation(s)
- Maddie Cusimano
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, United States of America.
- Luke B Hewitt
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, United States of America
- Josh H McDermott
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, United States of America; McGovern Institute, Massachusetts Institute of Technology, United States of America; Center for Brains Minds and Machines, Massachusetts Institute of Technology, United States of America; Speech and Hearing Bioscience and Technology, Harvard University, United States of America.
2
Wang T, Fang Y, Whitney D. Efficient Coding in Motor Planning. bioRxiv: The Preprint Server for Biology 2024:2024.09.30.615975. [PMID: 39416082] [PMCID: PMC11483078] [DOI: 10.1101/2024.09.30.615975]
Abstract
A paramount challenge for the brain is to precisely model the world and control behavior within the confines of limited encoding capacities. Efficient coding theory posits a unified framework for understanding how neural systems enhance encoding accuracy by tuning to environmental statistics. While this theory has been thoroughly explored within the perceptual realm, it is less clear how efficient coding applies to the motor system. Here, we probe the core principles of efficient coding theory through center-out reaching tasks. Our results reveal novel sequential effects in motor planning. Specifically, current movements are biased in a direction opposite to recent movements, and movement variance increases with the angular divergence between successive actions. These effects are modulated by the variability within the motor system: a larger repulsive bias is observed when movements are performed with the nondominant hand compared to the dominant hand, and in individuals exhibiting higher motor variance compared to those with lower variance. These behavioral findings align with the predictions of an efficient coding model, suggesting that the motor system rapidly adapts to the context to enhance accuracy in motor planning.
Affiliation(s)
- Tianhe Wang
- Department of Psychology, University of California, Berkeley
- Department of Neuroscience, University of California, Berkeley
- These authors contributed equally
- Yifan Fang
- Department of Psychology, University of California, Berkeley
- These authors contributed equally
- David Whitney
- Department of Psychology, University of California, Berkeley
- Department of Neuroscience, University of California, Berkeley
- Vision Science Program, University of California, Berkeley
3
Scheller M, Fang H, Sui J. Self as a prior: The malleability of Bayesian multisensory integration to social salience. Br J Psychol 2024; 115:185-205. [PMID: 37747452] [DOI: 10.1111/bjop.12683]
Abstract
Our everyday perceptual experiences are grounded in the integration of information within and across our senses. Due to this direct behavioural relevance, cross-modal integration retains a certain degree of contextual flexibility, extending even to social relevance. However, how social relevance modulates cross-modal integration remains unclear. To investigate possible mechanisms, Experiment 1 tested the principles of audio-visual integration for numerosity estimation by deriving a Bayesian optimal observer model with a perceptual prior from empirical data to explain perceptual biases. Such perceptual priors may shift towards locations of high salience in the stimulus space. Our results showed that the tendency to over- or underestimate numerosity, expressed in the frequency and strength of fission and fusion illusions, depended on the actual event numerosity. Experiment 2 replicated the effects of social relevance on multisensory integration reported by Scheller and Sui (2022, JEP:HPP), using a smaller number of events, thereby favouring the opposite illusion through enhanced influences of the prior. In line with the idea that the self acts like a prior, the more frequently observed illusion (the one more malleable to prior influences) was modulated by self-relevance. Our findings suggest that the self can influence perception by acting like a prior in cue integration, biasing perceptual estimates towards areas of high self-relevance.
Affiliation(s)
- Meike Scheller
- Department of Psychology, University of Aberdeen, Aberdeen, UK
- Department of Psychology, Durham University, Durham, UK
- Huilin Fang
- Department of Psychology, University of Aberdeen, Aberdeen, UK
- Jie Sui
- Department of Psychology, University of Aberdeen, Aberdeen, UK
4
Arumugam D, Ho MK, Goodman ND, Van Roy B. Bayesian Reinforcement Learning With Limited Cognitive Load. Open Mind (Camb) 2024; 8:395-438. [PMID: 38665544] [PMCID: PMC11045037] [DOI: 10.1162/opmi_a_00132]
Abstract
All biological and artificial agents must act given limits on their ability to acquire and process information. As such, a general theory of adaptive behavior should be able to account for the complex interactions between an agent's learning history, decisions, and capacity constraints. Recent work in computer science has begun to clarify the principles that shape these dynamics by bridging ideas from reinforcement learning, Bayesian decision-making, and rate-distortion theory. This body of work provides an account of capacity-limited Bayesian reinforcement learning, a unifying normative framework for modeling the effect of processing constraints on learning and action selection. Here, we provide an accessible review of recent algorithms and theoretical results in this setting, paying special attention to how these ideas can be applied to studying questions in the cognitive and behavioral sciences.
Affiliation(s)
- Mark K. Ho
- Center for Data Science, New York University
- Noah D. Goodman
- Department of Computer Science, Stanford University
- Department of Psychology, Stanford University
- Benjamin Van Roy
- Department of Electrical Engineering, Stanford University
- Department of Management Science & Engineering, Stanford University
5
Rubinstein JF, Singh M, Kowler E. Bayesian approaches to smooth pursuit of random dot kinematograms: effects of varying RDK noise and the predictability of RDK direction. J Neurophysiol 2024; 131:394-416. [PMID: 38149327] [PMCID: PMC11551001] [DOI: 10.1152/jn.00116.2023]
Abstract
Smooth pursuit eye movements respond on the basis of both immediate and anticipated target motion, where anticipations may be derived from either memory or perceptual cues. To study the combined influence of both immediate sensory motion and anticipation, subjects pursued clear or noisy random dot kinematograms (RDKs) whose mean directions were chosen from Gaussian distributions with SDs = 10° (narrow prior) or 45° (wide prior). Pursuit directions were consistent with Bayesian theory in that transitions over time from dependence on the prior to near total dependence on immediate sensory motion (likelihood) took longer with the noisier RDKs and with the narrower, more reliable, prior. Results were fit to Bayesian models in which parameters representing the variability of the likelihood either were or were not constrained to be the same for both priors. The unconstrained model provided a statistically better fit, with the influence of the prior in the constrained model smaller than predicted from strict reliability-based weighting of prior and likelihood. Factors that may have contributed to this outcome include prior variability different from nominal values, low-level sensorimotor learning with the narrow prior, or departures of pursuit from strict adherence to reliability-based weighting. Although modifications of, or alternatives to, the normative Bayesian model will be required, these results, along with previous studies, suggest that Bayesian approaches are a promising framework to understand how pursuit combines immediate sensory motion, past history, and informative perceptual cues to accurately track the target motion that is most likely to occur in the immediate future.
NEW & NOTEWORTHY: Smooth pursuit eye movements respond on the basis of anticipated, as well as immediate, target motions. Bayesian models using reliability-based weighting of previous (prior) and immediate target motions (likelihood) accounted for many, but not all, aspects of pursuit of clear and noisy random dot kinematograms with different levels of predictability. Bayesian approaches may solve the long-standing problem of how pursuit combines immediate sensory motion and anticipation of future motion to configure an effective response.
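As a minimal sketch of the reliability-based weighting at the core of these models (not the authors' fitted model): for a Gaussian prior over direction and a Gaussian sensory likelihood, the posterior mean weights each source by its inverse variance, so pursuit should lean more on the prior when the RDK is noisier and the prior narrower. The 10° and 45° prior SDs below come from the abstract; the 30° sensory SD is an assumed value for a noisy RDK.

```python
import numpy as np

def combine_prior_likelihood(prior_mean, prior_sd, sensory_mean, sensory_sd):
    """Posterior mean/SD for a Gaussian prior and Gaussian likelihood over
    motion direction (degrees); each source is weighted by its inverse variance."""
    w_prior = 1.0 / prior_sd**2
    w_sense = 1.0 / sensory_sd**2
    post_mean = (w_prior * prior_mean + w_sense * sensory_mean) / (w_prior + w_sense)
    post_sd = np.sqrt(1.0 / (w_prior + w_sense))
    return post_mean, post_sd

# Narrow prior (SD = 10 deg) vs. wide prior (SD = 45 deg); assumed sensory SD = 30 deg
for prior_sd in (10.0, 45.0):
    mean, sd = combine_prior_likelihood(0.0, prior_sd, sensory_mean=20.0, sensory_sd=30.0)
    print(f"prior SD {prior_sd:4.0f} deg -> posterior mean {mean:5.1f} deg (posterior SD {sd:4.1f})")
```

With the narrow prior the estimate stays near the prior mean; with the wide prior it shifts most of the way toward the sensory direction.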
Affiliation(s)
- Jason F Rubinstein
- Department of Psychology, Rutgers University, Piscataway, New Jersey, United States
- Manish Singh
- Department of Psychology, Rutgers University, Piscataway, New Jersey, United States
- Eileen Kowler
- Department of Psychology, Rutgers University, Piscataway, New Jersey, United States
6
Peelen MV, Berlot E, de Lange FP. Predictive processing of scenes and objects. Nature Reviews Psychology 2024; 3:13-26. [PMID: 38989004] [PMCID: PMC7616164] [DOI: 10.1038/s44159-023-00254-0]
Abstract
Real-world visual input consists of rich scenes that are meaningfully composed of multiple objects which interact in complex, but predictable, ways. Despite this complexity, we recognize scenes, and objects within these scenes, from a brief glance at an image. In this review, we synthesize recent behavioral and neural findings that elucidate the mechanisms underlying this impressive ability. First, we review evidence that visual object and scene processing is partly implemented in parallel, allowing for a rapid initial gist of both objects and scenes concurrently. Next, we discuss recent evidence for bidirectional interactions between object and scene processing, with scene information modulating the visual processing of objects, and object information modulating the visual processing of scenes. Finally, we review evidence that objects also combine with each other to form object constellations, modulating the processing of individual objects within the object pathway. Altogether, these findings can be understood by conceptualizing object and scene perception as the outcome of a joint probabilistic inference, in which "best guesses" about objects act as priors for scene perception and vice versa, in order to concurrently optimize visual inference of objects and scenes.
Affiliation(s)
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Eva Berlot
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Floris P de Lange
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
7
Lee HJ, Lee H, Lim CY, Rhim I, Lee SH. Corrective feedback guides human perceptual decision-making by informing about the world state rather than rewarding its choice. PLoS Biol 2023; 21:e3002373. [PMID: 37939126] [PMCID: PMC10659185] [DOI: 10.1371/journal.pbio.3002373]
Abstract
Corrective feedback received on perceptual decisions is crucial for adjusting decision-making strategies to improve future choices. However, its complex interaction with other decision components, such as previous stimuli and choices, challenges a principled account of how it shapes subsequent decisions. One popular approach, based on animal behavior and extended to human perceptual decision-making, employs "reinforcement learning," a principle proven successful in reward-based decision-making. The core idea behind this approach is that decision-makers, although engaged in a perceptual task, treat corrective feedback as rewards from which they learn choice values. Here, we explore an alternative idea, which is that humans consider corrective feedback on perceptual decisions as evidence of the actual state of the world rather than as rewards for their choices. By implementing these "feedback-as-reward" and "feedback-as-evidence" hypotheses on a shared learning platform, we show that the latter outperforms the former in explaining how corrective feedback adjusts the decision-making strategy along with past stimuli and choices. Our work suggests that humans learn about what has happened in their environment rather than the values of their own choices through corrective feedback during perceptual decision-making.
Affiliation(s)
- Hyang-Jung Lee
- Department of Brain and Cognitive Sciences, Seoul National University, Seoul, South Korea
- Heeseung Lee
- Department of Brain and Cognitive Sciences, Seoul National University, Seoul, South Korea
- Chae Young Lim
- Department of Statistics, Seoul National University, Seoul, South Korea
- Issac Rhim
- Institute of Neuroscience, University of Oregon, Eugene, Oregon, United States of America
- Sang-Hun Lee
- Department of Brain and Cognitive Sciences, Seoul National University, Seoul, South Korea
8
Robinson MM, Brady TF. A quantitative model of ensemble perception as summed activation in feature space. Nat Hum Behav 2023; 7:1638-1651. [PMID: 37402880] [PMCID: PMC10810262] [DOI: 10.1038/s41562-023-01602-z]
Abstract
Ensemble perception is a process by which we summarize complex scenes. Despite the importance of ensemble perception to everyday cognition, there are few computational models that provide a formal account of this process. Here we develop and test a model in which ensemble representations reflect the global sum of activation signals across all individual items. We leverage this set of minimal assumptions to formally connect a model of memory for individual items to ensembles. We compare our ensemble model against a set of alternative models in five experiments. Our approach uses performance on a visual memory task for individual items to generate zero-free-parameter predictions of interindividual and intraindividual differences in performance on an ensemble continuous-report task. Our top-down modelling approach formally unifies models of memory for individual items and ensembles and opens a venue for building and comparing models of distinct memory processes and representations.
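As a toy illustration of the core idea (an ensemble readout from the global sum of item-level activation in feature space), the sketch below sums Gaussian tuning bumps over a circular feature dimension; it is a schematic, not the authors' zero-free-parameter model, and the 15° tuning width is an arbitrary choice.

```python
import numpy as np

def ensemble_from_summed_activation(item_features, tuning_sd=15.0):
    """Toy ensemble readout: each item contributes a Gaussian activation bump
    over a circular feature space (degrees); the ensemble estimate is taken as
    the peak of the summed activation profile."""
    feature_axis = np.arange(0.0, 360.0, 0.5)
    # signed circular difference between each axis point and each item feature
    diff = (feature_axis[:, None] - np.asarray(item_features)[None, :] + 180.0) % 360.0 - 180.0
    summed_activation = np.exp(-0.5 * (diff / tuning_sd) ** 2).sum(axis=1)
    return feature_axis[np.argmax(summed_activation)]

# The summed-activation peak sits near the dense cluster of items (~95-100 deg),
# whereas the arithmetic mean of the features would be pulled out to ~127 deg.
print(ensemble_from_summed_activation([80, 95, 100, 110, 250]))
```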
Affiliation(s)
- Maria M Robinson
- Psychology Department, University of California, San Diego, La Jolla, CA, USA.
- Timothy F Brady
- Psychology Department, University of California, San Diego, La Jolla, CA, USA.
9
Cochrane A, Cox WTL, Green CS. Robust within-session modulations of IAT scores may reveal novel dynamics of rapid change. Sci Rep 2023; 13:16247. [PMID: 37758761] [PMCID: PMC10533519] [DOI: 10.1038/s41598-023-43370-w]
Abstract
The Implicit Association Test (IAT) is employed in social psychology as a measure of implicit evaluation. Participants in this task complete blocks of trials in which they are asked to respond to categories and attributes (e.g., types of faces and types of words). Reaction times in blocks sharing certain response combinations are averaged, subtracted from the averages of blocks with other response combinations, and then normalized; the result is taken as a measure of implicit evaluation toward or away from the given categories. One assumption of this approach is stationarity of response time distributions or, at a minimum, that temporal dynamics in response times are not theoretically relevant. Here we test these assumptions by examining the extent to which response times change within IAT blocks and, if so, whether the trajectories of change are meaningful in relation to external measures. Using multiple data sets, we demonstrate within-session changes in IAT scores. Further, we demonstrate that dissociable components in the trajectories of IAT performance may be linked to theoretically distinct processes of cognitive biases as well as behaviors. The present work provides evidence that IAT performance changes within the task; future work is needed to fully assess the implications of these temporal dynamics.
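A stripped-down version of the block-difference score described above (mean reaction-time difference between the two critical block types, normalized by the pooled standard deviation) is sketched below; the trial-filtering and block-weighting rules of the standard IAT D-score are omitted, and the simulated reaction times are purely illustrative.

```python
import numpy as np

def iat_score(rt_compatible, rt_incompatible):
    """Simplified IAT-style score: difference of mean reaction times between
    the two critical block types, normalized by the pooled standard deviation
    of all trials (standard D-score trial-exclusion rules omitted)."""
    rt_c = np.asarray(rt_compatible, dtype=float)
    rt_i = np.asarray(rt_incompatible, dtype=float)
    pooled_sd = np.concatenate([rt_c, rt_i]).std(ddof=1)
    return (rt_i.mean() - rt_c.mean()) / pooled_sd

rng = np.random.default_rng(1)
early = iat_score(rng.normal(700, 100, 40), rng.normal(780, 110, 40))
late = iat_score(rng.normal(700, 100, 40), rng.normal(720, 110, 40))
# A decline of this kind across a session is the sort of within-block change the authors examine.
print(early, late)
```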
Affiliation(s)
- Aaron Cochrane
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA.
- Faculty of Education and Psychological Sciences, University of Geneva, Geneva, Switzerland.
- William T L Cox
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA
- Inequity Agents of Change, Madison, WI, USA
- C Shawn Green
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA
10
Park J, Kim S, Kim HR, Lee J. Prior expectation enhances sensorimotor behavior by modulating population tuning and subspace activity in sensory cortex. Science Advances 2023; 9:eadg4156. [PMID: 37418521] [PMCID: PMC10328413] [DOI: 10.1126/sciadv.adg4156]
Abstract
Prior knowledge facilitates our perception and goal-directed behaviors, particularly when sensory input is lacking or noisy. However, the neural mechanisms underlying the improvement in sensorimotor behavior by prior expectations remain unknown. In this study, we examine the neural activity in the middle temporal (MT) area of visual cortex while monkeys perform a smooth pursuit eye movement task with prior expectation of the visual target's motion direction. Prior expectations discriminately reduce the MT neural responses depending on their preferred directions, when the sensory evidence is weak. This response reduction effectively sharpens neural population direction tuning. Simulations with a realistic MT population demonstrate that sharpening the tuning can explain the biases and variabilities in smooth pursuit, suggesting that neural computations in the sensory area alone can underpin the integration of prior knowledge and sensory evidence. State-space analysis further supports this by revealing neural signals of prior expectations in the MT population activity that correlate with behavioral changes.
Affiliation(s)
- JeongJun Park
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO 63110, United States of America
- Seolmin Kim
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- HyungGoo R. Kim
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Joonyeol Lee
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Department of Intelligent Precision Healthcare Convergence, Sungkyunkwan University, Suwon 16419, Republic of Korea
11
Camponogara I. The integration of action-oriented multisensory information from target and limb within the movement planning and execution. Neurosci Biobehav Rev 2023; 151:105228. [PMID: 37201591] [DOI: 10.1016/j.neubiorev.2023.105228]
Abstract
The planning and execution of a grasping or reaching movement toward targets we sense with the other hand requires integrating multiple sources of sensory information about the limb performing the movement and the target of the action. In the last two decades, several sensory and motor control theories have thoroughly described how this multisensory-motor integration process occurs. However, even though these theories were very influential in their respective fields, they lack a clear, unified vision of how target-related and movement-related multisensory information is integrated within the action planning and execution phases. This brief review aims to summarize the most influential theories in multisensory integration and sensory-motor control by underscoring their critical points and hidden connections, providing new ideas on the multisensory-motor integration process. Throughout the review, I will propose an alternative view of how the multisensory integration process unfolds across action planning and execution, and I will draw several connections with existing multisensory-motor control theories.
Affiliation(s)
- Ivan Camponogara
- Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates.
12
Wang Y, Bundgaard-Nielsen RL, Baker BJ, Maxwell O. Difficulties in decoupling articulatory gestures in L2 phonemic sequences: the case of Mandarin listeners' perceptual deletion of English post-vocalic laterals. Phonetica 2023:phon-2022-0027. [PMID: 37013664] [DOI: 10.1515/phon-2022-0027]
Abstract
Nonnative or second language (L2) perception of segmental sequences is often characterised by perceptual modification processes, which may "repair" a nonnative sequence that is phonotactically illegal in the listeners' native language (L1) by transforming the sequence into a sequence that is phonotactically legal in the L1. Often repairs involve the insertion of phonetic materials (epenthesis), but we focus, here, on the less-studied phenomenon of perceptual deletion of nonnative phonemes by testing L1 Mandarin listeners' perception of post-vocalic laterals in L2 English using the triangulating methods of a cross-language goodness rating task, an AXB task, and an AX task. The data were analysed in the framework of the Perceptual Assimilation Model (PAM/PAM-L2), and we further investigated the role of L2 vocabulary size on task performance. The experiments indicate that perceptual deletion occurs when the post-vocalic lateral overlaps with the nucleus vowel in terms of tongue backness specification. In addition, Mandarin listeners' discrimination performance in some contexts was significantly correlated with their English vocabulary size, indicating that continuous growth of vocabulary knowledge can drive perceptual learning of novel L2 segmental sequences and phonotactic structures.
Affiliation(s)
- Yizhou Wang
- School of Languages and Linguistics, The University of Melbourne, Melbourne, Australia
- Brett J Baker
- School of Languages and Linguistics, The University of Melbourne, Melbourne, Australia
- Olga Maxwell
- School of Languages and Linguistics, The University of Melbourne, Melbourne, Australia
13
Noel JP, Bill J, Ding H, Vastola J, DeAngelis GC, Angelaki DE, Drugowitsch J. Causal inference during closed-loop navigation: parsing of self- and object-motion. bioRxiv: The Preprint Server for Biology 2023:2023.01.27.525974. [PMID: 36778376] [PMCID: PMC9915492] [DOI: 10.1101/2023.01.27.525974]
Abstract
A key computation in building adaptive internal models of the external world is to ascribe sensory signals to their likely cause(s), a process of Bayesian Causal Inference (CI). CI is well studied within the framework of two-alternative forced-choice tasks, but less well understood within the cadre of naturalistic action-perception loops. Here, we examine the process of disambiguating retinal motion caused by self- and/or object-motion during closed-loop navigation. First, we derive a normative account specifying how observers ought to intercept hidden and moving targets given their belief over (i) whether retinal motion was caused by the target moving, and (ii) if so, with what velocity. Next, in line with the modeling results, we show that humans report targets as stationary and steer toward their initial rather than final position more often when they are themselves moving, suggesting a misattribution of object-motion to the self. Further, we predict that observers should misattribute retinal motion more often: (i) during passive rather than active self-motion (given the lack of an efference copy informing self-motion estimates in the former), and (ii) when targets are presented eccentrically rather than centrally (given that lateral self-motion flow vectors are larger at eccentric locations during forward self-motion). Results confirm both of these predictions. Lastly, analysis of eye movements shows that, while initial saccades toward targets are largely accurate regardless of the self-motion condition, subsequent gaze pursuit was modulated by target velocity during object-only motion, but not during concurrent object- and self-motion. These results demonstrate CI within action-perception loops and suggest a protracted temporal unfolding of the computations characterizing CI.
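The class of model at issue can be illustrated with a generic one-dimensional causal-inference sketch of the kind standard in this literature, not the paper's closed-loop navigation model: the observer asks whether the residual retinal motion, after subtracting the component predicted from self-motion, is better explained by a moving object or by measurement noise, and model-averages the object-velocity estimate accordingly. All parameter values below are assumptions.

```python
import numpy as np

def object_motion_inference(retinal_motion, predicted_from_self,
                            sigma_meas=2.0, sigma_obj=4.0, p_moving_prior=0.5):
    """Generic 1-D Bayesian causal inference: is the residual retinal motion
    (retinal motion minus the part predicted from self-motion) caused by a
    moving object, or is it measurement noise around a stationary object?"""
    residual = retinal_motion - predicted_from_self
    var_noise = sigma_meas ** 2
    var_moving = var_noise + sigma_obj ** 2
    # marginal likelihood of the residual under each causal hypothesis
    like_stationary = np.exp(-0.5 * residual**2 / var_noise) / np.sqrt(2 * np.pi * var_noise)
    like_moving = np.exp(-0.5 * residual**2 / var_moving) / np.sqrt(2 * np.pi * var_moving)
    p_moving = (p_moving_prior * like_moving /
                (p_moving_prior * like_moving + (1 - p_moving_prior) * like_stationary))
    # model-averaged object-velocity estimate: 0 if stationary, shrunk residual if moving
    v_object = p_moving * residual * sigma_obj**2 / var_moving
    return p_moving, v_object

print(object_motion_inference(1.0, 0.0))  # small residual: mostly attributed to noise/self-motion
print(object_motion_inference(8.0, 0.0))  # large residual: attributed to object motion
```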
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York City, NY, United States
- Johannes Bill
- Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Department of Psychology, Harvard University, Cambridge, MA, United States
- Haoran Ding
- Center for Neural Science, New York University, New York City, NY, United States
- John Vastola
- Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, United States
- Dora E. Angelaki
- Center for Neural Science, New York University, New York City, NY, United States
- Tandon School of Engineering, New York University, New York City, NY, United States
- Jan Drugowitsch
- Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Center for Brain Science, Harvard University, Boston, MA, United States
14
Domini F. The case against probabilistic inference: a new deterministic theory of 3D visual processing. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210458. [PMID: 36511407] [PMCID: PMC9745883] [DOI: 10.1098/rstb.2021.0458]
Abstract
How the brain derives 3D information from inherently ambiguous visual input remains the fundamental question of human vision. The past two decades of research have addressed this question as a problem of probabilistic inference, the dominant model being maximum-likelihood estimation (MLE). This model assumes that independent depth-cue modules derive noisy but statistically accurate estimates of 3D scene parameters that are combined through a weighted average. Cue weights are adjusted based on the system representation of each module's output variability. Here I demonstrate that the MLE model fails to account for important psychophysical findings and, importantly, misinterprets the just noticeable difference, a hallmark measure of stimulus discriminability, to be an estimate of perceptual uncertainty. I propose a new theory, termed Intrinsic Constraint, which postulates that the visual system does not derive the most probable interpretation of the visual input, but rather, the most stable interpretation amid variations in viewing conditions. This goal is achieved with the Vector Sum model, which represents individual cue estimates as components of a multi-dimensional vector whose norm determines the combined output. This model accounts for the psychophysical findings cited in support of MLE, while predicting existing and new findings that contradict the MLE model. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
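The contrast between the two accounts can be caricatured in a few lines: MLE combines single-cue estimates as an inverse-variance-weighted average, so the combined estimate always lies between them, whereas a Vector Sum readout, sketched very loosely here as the norm of the cue-signal vector, can exceed either single-cue value. The numbers and the Vector Sum parameterization are illustrative assumptions, not the paper's full model.

```python
import numpy as np

def mle_combination(estimates, sigmas):
    """Standard MLE cue combination: inverse-variance-weighted average."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    return float(np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w))

def vector_sum_combination(cue_signals):
    """Schematic Vector Sum readout: cue signals are treated as components of a
    multidimensional vector, and the combined output is set by the vector's norm."""
    return float(np.linalg.norm(np.asarray(cue_signals, dtype=float)))

stereo, motion = 1.2, 0.9                                     # illustrative single-cue depth signals
print(mle_combination([stereo, motion], sigmas=[0.2, 0.4]))   # stays between the single-cue values
print(vector_sum_combination([stereo, motion]))               # exceeds either single-cue value
```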
Affiliation(s)
- Fulvio Domini
- Department of Cognitive, Linguistic, and Psychological Sciences (CLPS), Brown University, 190 Thayer Street, Providence, Rhode Island 02912-9067, USA
15
Lee JL, Denison R, Ma WJ. Challenging the fixed-criterion model of perceptual decision-making. Neurosci Conscious 2023; 2023:niad010. [PMID: 37089450] [PMCID: PMC10118309] [DOI: 10.1093/nc/niad010]
Abstract
Perceptual decision-making is often conceptualized as the process of comparing an internal decision variable to a categorical boundary or criterion. How the mind sets such a criterion has been studied from at least two perspectives. One idea is that the criterion is a fixed quantity. In work on subjective phenomenology, the notion of a fixed criterion has been proposed to explain a phenomenon called "subjective inflation", a form of metacognitive mismatch in which observers overestimate the quality of their sensory representation in the periphery or at unattended locations. A contrasting view emerging from studies of perceptual decision-making is that the criterion adjusts to the level of sensory uncertainty and is thus sensitive to variations in attention. Here, we mathematically demonstrate that previous empirical findings supporting subjective inflation are consistent with either a fixed or a flexible decision criterion. We further lay out specific task properties that are necessary to make inferences about the flexibility of the criterion: (i) a clear mapping from decision variable space to stimulus feature space and (ii) an incentive for observers to adjust their decision criterion as uncertainty changes. Recent work satisfying these requirements has demonstrated that decision criteria flexibly adjust according to uncertainty. We conclude that the fixed-criterion model of subjective inflation is poorly tenable.
Affiliation(s)
- Jennifer Laura Lee
- Center for Neural Science and Department of Psychology, New York University, 4 Washington Pl, New York City, NY 10003, United States
- Rachel Denison
- Center for Neural Science and Department of Psychology, New York University, 4 Washington Pl, New York City, NY 10003, United States
- Department of Psychological & Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA 02139, United States
- Wei Ji Ma
- Center for Neural Science and Department of Psychology, New York University, 4 Washington Pl, New York City, NY 10003, United States
16
Moon J, Kwon OS. Attractive and repulsive effects of sensory history concurrently shape visual perception. BMC Biol 2022; 20:247. [PMID: 36345010] [PMCID: PMC9641899] [DOI: 10.1186/s12915-022-01444-7]
Abstract
BACKGROUND: Sequential effects of environmental stimuli are ubiquitous in most behavioral tasks involving magnitude estimation, memory, decision making, and emotion. The human visual system exploits continuity in the visual environment, which induces two contrasting perceptual phenomena shaping visual perception. Previous work reported that perceptual estimation of a stimulus may be influenced either by attractive serial dependencies or repulsive aftereffects, with a number of experimental variables suggested as factors determining the direction and magnitude of sequential effects. Recent studies have theorized that these two effects concurrently arise in perceptual processing, but empirical evidence that directly supports this hypothesis is lacking, and it remains unclear whether and how attractive and repulsive sequential effects interact in a trial. Here we show that the two effects concurrently modulate estimation behavior in a typical sequence of perceptual tasks.
RESULTS: We first demonstrate that observers' estimation error as a function of both the previous stimulus and response cannot be fully described by either attractive or repulsive bias but is instead well captured by a summation of repulsion from the previous stimulus and attraction toward the previous response. We then reveal that the repulsive bias is centered on the observer's sensory encoding of the previous stimulus, which is again repelled away from its own preceding trial, whereas the attractive bias is centered precisely on the previous response, which is the observer's best prediction about the incoming stimuli.
CONCLUSIONS: Our findings provide strong evidence that sensory encoding is shaped by dynamic tuning of the system to the past stimuli, inducing repulsive aftereffects, and followed by inference incorporating the prediction from the past estimation, leading to attractive serial dependence.
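The additive account in the Results can be written down as a toy model in which the predicted estimation error is a repulsive derivative-of-Gaussian (DoG) curve centred on the previous stimulus plus an attractive DoG centred on the previous response; the DoG shape and all parameter values below are illustrative, not the paper's fits.

```python
import numpy as np

def dog(delta, peak, width):
    """Derivative-of-Gaussian bias curve, scaled so that `peak` (degrees) is the
    maximum bias, reached at delta = width."""
    z = delta / width
    return peak * z * np.exp(0.5 * (1.0 - z**2))

def predicted_error(stimulus, prev_stimulus, prev_response,
                    peak_repulsion=-2.0, width_repulsion=25.0,
                    peak_attraction=3.0, width_attraction=20.0):
    """Toy additive model: estimation error = repulsion away from the previous
    stimulus plus attraction toward the previous response (all in degrees)."""
    return (dog(prev_stimulus - stimulus, peak_repulsion, width_repulsion)
            + dog(prev_response - stimulus, peak_attraction, width_attraction))

# A previous stimulus below the current one pushes the estimate upward (repulsion),
# and a previous response above it pulls the estimate upward as well (attraction).
print(predicted_error(stimulus=90.0, prev_stimulus=70.0, prev_response=95.0))
```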
Affiliation(s)
- Jongmin Moon
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, 50 UNIST-gil, Ulsan, 44919, South Korea
- Oh-Sang Kwon
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, 50 UNIST-gil, Ulsan, 44919, South Korea
17
Peters MA. Towards characterizing the canonical computations generating phenomenal experience. Neurosci Biobehav Rev 2022; 142:104903. [DOI: 10.1016/j.neubiorev.2022.104903]
18
Locke SM, Landy MS, Mamassian P. Suprathreshold perceptual decisions constrain models of confidence. PLoS Comput Biol 2022; 18:e1010318. [PMID: 35895747] [PMCID: PMC9359550] [DOI: 10.1371/journal.pcbi.1010318]
Abstract
Perceptual confidence is an important internal signal about the certainty of our decisions, and there is a substantial debate on how it is computed. We highlight three confidence metric types from the literature: observers use (1) the full probability distribution to compute probability correct (Probability metrics), (2) point estimates from the perceptual decision process to estimate uncertainty (Evidence-Strength metrics), or (3) heuristic confidence from stimulus-based cues to uncertainty (Heuristic metrics). These metrics are rarely tested against one another, so we examined models of all three types on a suprathreshold spatial discrimination task. Observers were shown a cloud of dots sampled from a dot-generating distribution and judged whether the mean of the distribution was left or right of centre. In addition to varying the horizontal position of the mean, there were two sensory uncertainty manipulations: the number of dots sampled and the spread of the generating distribution. After every two perceptual decisions, observers made a confidence forced-choice judgement of whether they were more confident in the first or second decision. Model results showed that the majority of observers were best fit by either (1) the Heuristic model, which used dot-cloud position, spread, and number of dots as cues, or (2) an Evidence-Strength model, which computed the distance between the sensory measurement and the discrimination criterion, scaled according to sensory uncertainty. An accidental repetition of some sessions also allowed for the measurement of confidence agreement for identical pairs of stimuli. This N-pass analysis revealed that human observers were more consistent than their best-fitting model would predict, indicating there are still aspects of confidence that are not captured by our modelling. As such, we propose confidence agreement as a useful technique for computational studies of confidence. Taken together, these findings highlight the idiosyncratic nature of confidence computations for complex decision contexts and the need to consider different potential metrics and transformations in the confidence computation.
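Two of the three metric families have compact one-dimensional forms, sketched below under simplifying assumptions (Gaussian measurement noise, a flat prior over the true mean, illustrative numbers); these are schematic parameterizations, not the paper's fitted models.

```python
from math import erf, sqrt

def evidence_strength_confidence(measurement, criterion, sigma):
    """Evidence-Strength style metric: distance of the sensory measurement from
    the discrimination criterion, scaled by the sensory uncertainty."""
    return abs(measurement - criterion) / sigma

def probability_confidence(measurement, criterion, sigma):
    """Probability style metric: posterior probability that the chosen side is
    correct, assuming Gaussian noise and a flat prior over the true mean
    (standard normal CDF of the scaled distance)."""
    z = abs(measurement - criterion) / sigma
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

for sigma in (1.0, 3.0):   # low vs. high sensory uncertainty for the same measurement
    print(evidence_strength_confidence(1.5, 0.0, sigma),
          probability_confidence(1.5, 0.0, sigma))
```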
Affiliation(s)
- Shannon M. Locke
- Laboratoire des Systèmes Perceptifs, Département d’Études Cognitives, École Normale Supérieure, PSL University, CNRS, Paris, France
- Michael S. Landy
- Department of Psychology, New York University, New York, New York, United States of America
- Center for Neural Science, New York University, New York, New York, United States of America
- Pascal Mamassian
- Laboratoire des Systèmes Perceptifs, Département d’Études Cognitives, École Normale Supérieure, PSL University, CNRS, Paris, France
19
Combination and competition between path integration and landmark navigation in the estimation of heading direction. PLoS Comput Biol 2022; 18:e1009222. [PMID: 35143474] [PMCID: PMC8865642] [DOI: 10.1371/journal.pcbi.1009222]
Abstract
Successful navigation requires the ability to compute one’s location and heading from incoming multisensory information. Previous work has shown that this multisensory input comes in two forms: body-based idiothetic cues, from one’s own rotations and translations, and visual allothetic cues, from the environment (usually visual landmarks). However, exactly how these two streams of information are integrated is unclear, with some models suggesting the body-based idiothetic and visual allothetic cues are combined, while others suggest they compete. In this paper we investigated the integration of body-based idiothetic and visual allothetic cues in the computation of heading using virtual reality. In our experiment, participants performed a series of body turns of up to 360 degrees in the dark with only a brief flash (300 ms) of visual feedback en route. Because the environment was virtual, we had full control over the visual feedback and were able to vary the offset between this feedback and the true heading angle. By measuring the effect of the feedback offset on the angle participants turned, we were able to determine the extent to which they incorporated visual feedback as a function of the offset error. By further modeling this behavior we were able to quantify the computations people used. While there were considerable individual differences in performance on our task, with some participants mostly ignoring the visual feedback and others relying on it almost entirely, our modeling results suggest that almost all participants used the same strategy in which idiothetic and allothetic cues are combined when the mismatch between them is small, but compete when the mismatch is large. These findings suggest that participants update their estimate of heading using a hybrid strategy that mixes the combination and competition of cues.
Successful navigation requires us to combine visual information about our environment with body-based cues about our own rotations and translations. In this work we investigated how these disparate sources of information work together to compute an estimate of heading. Using a novel virtual reality task we measured how humans integrate visual and body-based cues when there is mismatch between them—that is, when the estimate of heading from visual information is different from body-based cues. By building computational models of different strategies, we reveal that humans use a hybrid strategy for integrating visual and body-based cues—combining them when the mismatch between them is small and picking one or the other when the mismatch is large.
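The hybrid strategy the modelling points to can be sketched as a simple rule (a caricature, not the paper's fitted model; the parameter values, and which cue wins under large conflict, are assumptions that varied across participants in the actual data).

```python
def heading_update(idiothetic, visual, sigma_idio=8.0, sigma_vis=4.0, cutoff=30.0):
    """Toy hybrid rule: combine body-based (idiothetic) and visual (allothetic)
    heading estimates by reliability when they agree, but fall back on a single
    cue when the mismatch is large. All values are in degrees."""
    mismatch = (visual - idiothetic + 180.0) % 360.0 - 180.0
    if abs(mismatch) <= cutoff:
        w_i, w_v = 1.0 / sigma_idio**2, 1.0 / sigma_vis**2
        return idiothetic + (w_v / (w_i + w_v)) * mismatch   # reliability-weighted combination
    return idiothetic                                        # competition: discount the discrepant cue

print(heading_update(100.0, 110.0))  # small conflict -> combined estimate
print(heading_update(100.0, 170.0))  # large conflict -> visual feedback ignored
```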
20
Abstract
Spatial navigation is a complex cognitive activity that depends on perception, action, memory, reasoning, and problem-solving. Effective navigation depends on the ability to combine information from multiple spatial cues to estimate one's position and the locations of goals. Spatial cues include landmarks and other visible features of the environment, as well as body-based cues generated by self-motion (vestibular, proprioceptive, and efferent information). A number of projects have investigated the extent to which visual cues and body-based cues are combined optimally according to statistical principles. Possible limitations of these investigations are that they have not accounted for navigators' prior experiences with or assumptions about the task environment and have not tested complete decision models. We examine cue combination in spatial navigation from a Bayesian perspective and present the fundamental principles of Bayesian decision theory. We show that a complete Bayesian decision model with an explicit loss function can explain a discrepancy between optimal cue weights and empirical cue weights observed by Chen et al. (Cognitive Psychology, 95, 105-144, 2017) and that the use of informative priors to represent cue bias can explain the incongruity between heading variability and heading direction observed by Zhao and Warren (2015b, Psychological Science, 26(6), 915-924). We also discuss Petzschner and Glasauer's (Journal of Neuroscience, 31(47), 17220-17229, 2011) use of priors to explain biases in estimates of linear displacements during visual path integration. We conclude that Bayesian decision theory offers a productive theoretical framework for investigating human spatial navigation and believe that it will lead to a deeper understanding of navigational behaviors.
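The point about explicit loss functions can be made concrete with a small grid-based sketch: with squared-error loss the optimal estimate is the posterior mean and recovers the familiar inverse-variance cue weights, whereas an asymmetric loss shifts the optimal action away from that mean, so fitted "cue weights" can look non-optimal even though inference is fully Bayesian. The cue values, noise levels, and 3:1 loss asymmetry below are illustrative assumptions.

```python
import numpy as np

def bayes_decision(cue_means, cue_sds, loss, grid=np.linspace(-90, 90, 3601)):
    """Grid-based Bayesian decision: combine independent Gaussian cues into a
    posterior over heading, then return the action minimizing expected loss."""
    log_post = np.zeros_like(grid)
    for m, s in zip(cue_means, cue_sds):
        log_post += -0.5 * ((grid - m) / s) ** 2
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    expected_loss = [(post * loss(a, grid)).sum() for a in grid]
    return grid[int(np.argmin(expected_loss))]

squared = lambda a, x: (a - x) ** 2
# asymmetric loss: overshooting the true heading is penalized 3x more than undershooting
asymmetric = lambda a, x: np.where(a > x, 3.0, 1.0) * np.abs(a - x)

cues = dict(cue_means=[0.0, 20.0], cue_sds=[5.0, 10.0])  # e.g., landmark vs. self-motion cue
print(bayes_decision(loss=squared, **cues))     # posterior mean: inverse-variance weighting (about 4 deg)
print(bayes_decision(loss=asymmetric, **cues))  # optimum shifts below the posterior mean
```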
21
Precision control for a flexible body representation. Neurosci Biobehav Rev 2021; 134:104401. [PMID: 34736884] [DOI: 10.1016/j.neubiorev.2021.10.023]
Abstract
Adaptive body representation requires the continuous integration of multisensory inputs within a flexible 'body model' in the brain. The present review evaluates the idea that this flexibility is augmented by the contextual modulation of sensory processing 'top-down', which can be described as precision control within predictive coding formulations of Bayesian inference. Specifically, I focus on the proposal that an attenuation of proprioception may facilitate the integration of conflicting visual and proprioceptive bodily cues. Firstly, I review empirical work suggesting that the processing of visual vs proprioceptive body position information can be contextualised 'top-down', for instance by adopting specific attentional task sets. Building on this, I review research showing a similar contextualisation of visual vs proprioceptive information processing in the rubber hand illusion and in visuomotor adaptation. Together, the reviewed literature suggests that proprioception, despite its indisputable importance for body perception and action control, can be attenuated top-down (through precision control) to facilitate the contextual adaptation of the brain's body model to novel visual feedback.
22
Yao B, Rolfs M, McLaughlin C, Isenstein EL, Guillory SB, Grosman H, Kashy DA, Foss-Feig JH, Thakkar KN. Oculomotor corollary discharge signaling is related to repetitive behavior in children with autism spectrum disorder. J Vis 2021; 21:9. [PMID: 34351395] [PMCID: PMC8354038] [DOI: 10.1167/jov.21.8.9]
Abstract
Corollary discharge (CD) signals are "copies" of motor signals sent to sensory regions that allow animals to adjust sensory consequences of self-generated actions. Autism spectrum disorder (ASD) is characterized by sensory and motor deficits, which may be underpinned by altered CD signaling. We evaluated oculomotor CD using the blanking task, which measures the influence of saccades on visual perception, in 30 children with ASD and 35 typically developing (TD) children. Participants were instructed to make a saccade to a visual target. Upon saccade initiation, the presaccadic target disappeared and reappeared to the left or right of the original position. Participants indicated the direction of the jump. With intact CD, participants can make accurate perceptual judgements. Otherwise, participants may use saccade landing site as a proxy of the presaccadic target and use it to inform perception. We used multilevel modeling to examine the influence of saccade landing site on trans-saccadic perceptual judgements. We found that, compared with TD participants, children with ASD were more sensitive to target displacement and less reliant on saccade landing site when spatial uncertainty of the post-saccadic target was high. This pattern was driven by ASD participants with less severe restricted and repetitive behaviors. These results suggest a relationship between altered CD signaling and core ASD symptoms.
Affiliation(s)
- Beier Yao
- Department of Psychology, Michigan State University, East Lansing, MI, USA
- Martin Rolfs
- Department of Psychology, Humboldt-Universität zu Berlin, Germany
- Christopher McLaughlin
- Seaver Autism Center, Icahn School of Medicine at Mount Sinai Hospital, New York, NY, USA
- Emily L Isenstein
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Sylvia B Guillory
- Seaver Autism Center, Icahn School of Medicine at Mount Sinai Hospital, New York, NY, USA
- Hannah Grosman
- Seaver Autism Center, Icahn School of Medicine at Mount Sinai Hospital, New York, NY, USA
- Deborah A Kashy
- Department of Psychology, Michigan State University, East Lansing, MI, USA
- Jennifer H Foss-Feig
- Seaver Autism Center, Icahn School of Medicine at Mount Sinai Hospital, New York, NY, USA
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai Hospital, New York, NY, USA
- Katharine N Thakkar
- Department of Psychology, Michigan State University, East Lansing, MI, USA
- Division of Psychiatry and Behavioral Medicine, Michigan State University, Grand Rapids, MI, USA
23
Abstract
How can we explain the regularities in subjective reports of human observers about their subjective visual experience of a stimulus? The present study tests whether a recent model of confidence in perceptual decisions, the weighted evidence and visibility model, can be generalized from confidence to subjective visibility. In a postmasked orientation identification task, observers reported the subjective visibility of the stimulus after each single identification response. Cognitive modelling revealed that the weighted evidence and visibility model provided a superior fit to the data compared with the standard signal detection model, the signal detection model with unsystematic noise superimposed on ratings, the postdecisional accumulation model, the two-channel model, the response-congruent evidence model, the two-dimensional Bayesian model, and the constant noise and decay model. A comparison between subjective visibility and decisional confidence revealed that visibility relied more on the strength of sensory evidence about features of the stimulus irrelevant to the identification judgment and less on evidence for the identification judgment. It is argued that at least two types of evidence are required to account for subjective visibility, one related to the identification judgment, and one related to the strength of stimulation.
24
Prat-Carrabin A, Meyniel F, Tsodyks M, Azeredo da Silveira R. Biases and Variability from Costly Bayesian Inference. Entropy (Basel, Switzerland) 2021; 23:603. [PMID: 34068364] [PMCID: PMC8153311] [DOI: 10.3390/e23050603]
Abstract
When humans infer underlying probabilities from stochastic observations, they exhibit biases and variability that cannot be explained on the basis of sound, Bayesian manipulations of probability. This is especially salient when beliefs are updated as a function of sequential observations. We introduce a theoretical framework in which biases and variability emerge from a trade-off between Bayesian inference and the cognitive cost of carrying out probabilistic computations. We consider two forms of the cost: a precision cost and an unpredictability cost; these penalize beliefs that are less entropic and less deterministic, respectively. We apply our framework to the case of a Bernoulli variable: the bias of a coin is inferred from a sequence of coin flips. Theoretical predictions are qualitatively different depending on the form of the cost. A precision cost induces overestimation of small probabilities, on average, and a limited memory of past observations, and, consequently, a fluctuating bias. An unpredictability cost induces underestimation of small probabilities and a fixed bias that remains appreciable even for nearly unbiased observations. The case of a fair (equiprobable) coin, however, is singular, with non-trivial and slow fluctuations in the inferred bias. The proposed framework of costly Bayesian inference illustrates the richness of a 'resource-rational' (or 'bounded-rational') picture of seemingly irrational human cognition.
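One concrete way a precision cost can play out, as the abstract notes, is as limited memory: if past observations are exponentially discounted while the prior pseudo-counts are not, the inferred bias keeps fluctuating and small probabilities are overestimated on average. The sketch below implements such a leaky Beta-Bernoulli update as a stand-in; it is not the paper's cost-based model, and the decay value is an arbitrary assumption.

```python
import numpy as np

def leaky_beta_inference(flips, decay=0.9, a0=1.0, b0=1.0):
    """Beta-Bernoulli inference with exponentially discounted (leaky) evidence
    counts, a simple stand-in for the limited memory induced by a precision
    cost. Returns the running posterior-mean estimate of P(heads)."""
    a, b = a0, b0
    estimates = []
    for flip in flips:                      # flip: 1 = heads, 0 = tails
        a = decay * (a - a0) + a0 + flip
        b = decay * (b - b0) + b0 + (1 - flip)
        estimates.append(a / (a + b))
    return np.array(estimates)

rng = np.random.default_rng(0)
flips = (rng.random(200) < 0.2).astype(int)   # true P(heads) = 0.2
est = leaky_beta_inference(flips)
# Running estimates keep fluctuating and sit above the true value of 0.2 on average.
print(est[-5:])
```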
Affiliation(s)
- Arthur Prat-Carrabin
- Department of Economics, Columbia University, New York, NY 10027, USA
- Laboratoire de Physique de l’École Normale Supérieure, Université Paris Sciences & Lettres, Centre National de la Recherche Scientifique, 75005 Paris, France
- Florent Meyniel
- Cognitive Neuroimaging Unit, Institut National de la Santé et de la Recherche Médicale, Commissariat à l’Energie Atomique et aux Energies Alternatives, Université Paris-Saclay, NeuroSpin Center, 91191 Gif-sur-Yvette, France
- Misha Tsodyks
- Department of Neurobiology, Weizmann Institute of Science, Rehovot 76000, Israel
- The Simons Center for Systems Biology, Institute for Advanced Study, Princeton, NJ 08540, USA
- Rava Azeredo da Silveira
- Laboratoire de Physique de l’École Normale Supérieure, Université Paris Sciences & Lettres, Centre National de la Recherche Scientifique, 75005 Paris, France
- Department of Neurobiology, Weizmann Institute of Science, Rehovot 76000, Israel
- Institute of Molecular and Clinical Ophthalmology Basel, 4056 Basel, Switzerland
- Faculty of Science, University of Basel, 4001 Basel, Switzerland
25
Abstract
By sharing their world, humans and other animals sustain each other. Their world gets determined over time as generations of animals act in it. Current approaches to psychological science, by contrast, start from the assumption that the world is already determined before an animal's activity. These approaches seem more concerned with uncertainty about the world than with the practical indeterminacies of the world humans and nonhuman animals experience. As human activity is making life increasingly hard for other animals, this preoccupation becomes difficult to accept. This article introduces an ecological approach to psychology to develop a view that centralizes the indeterminacies of a shared world. Specifically, it develops an open-ended notion of "affordances," the possibilities for action offered by the environment. Affordances are processes in which (a) the material world invites individual animals to participate, while (b) participation concurrently continues the material world in a particular way. From this point of view, species codetermine the world together. Several empirical and methodological implications of this view on affordances are explored. The article ends with an explanation of how an ecological perspective brings responsibility for the shared world to the heart of psychological science.
Collapse
Affiliation(s)
- Ludger van Dijk
- Centre for Philosophical Psychology, Department of Philosophy, University of Antwerp
| |
Collapse
|
26
|
Abstract
Adaptive behavior in a complex, dynamic, and multisensory world poses some of the most fundamental computational challenges for the brain, notably inference, decision-making, learning, binding, and attention. We first discuss how the brain integrates sensory signals from the same source to support perceptual inference and decision-making by weighting them according to their momentary sensory uncertainties. We then show how observers solve the binding or causal inference problem-deciding whether signals come from common causes and should hence be integrated or else be treated independently. Next, we describe the multifarious interplay between multisensory processing and attention. We argue that attentional mechanisms are crucial to compute approximate solutions to the binding problem in naturalistic environments when complex time-varying signals arise from myriad causes. Finally, we review how the brain dynamically adapts multisensory processing to a changing world across multiple timescales.
Collapse
Affiliation(s)
- Uta Noppeney
- Donders Institute for Brain, Cognition and Behavior, Radboud University, 6525 AJ Nijmegen, The Netherlands;
| |
Collapse
|
27
|
Rehrig GL, Cheng M, McMahan BC, Shome R. Why are the batteries in the microwave?: Use of semantic information under uncertainty in a search task. COGNITIVE RESEARCH-PRINCIPLES AND IMPLICATIONS 2021; 6:32. [PMID: 33855644 PMCID: PMC8046897 DOI: 10.1186/s41235-021-00294-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/18/2020] [Accepted: 03/23/2021] [Indexed: 11/10/2022]
Abstract
A major problem in human cognition is to understand how newly acquired information and long-standing beliefs about the environment combine to make decisions and plan behaviors. Over-dependence on long-standing beliefs may be a significant source of suboptimal decision-making in unusual circumstances. While the contribution of long-standing beliefs about the environment to search in real-world scenes is well studied, less is known about how new evidence informs search decisions, and it is unclear whether the two sources of information are used together optimally to guide search. The present study expanded on the literature on semantic guidance in visual search by modeling a Bayesian ideal observer's use of long-standing semantic beliefs and recent experience in an active search task. The ability to adjust expectations to the task environment was simulated using the Bayesian ideal observer, and subjects' performance was compared to that of ideal observers that depended on prior knowledge and recent experience to varying degrees. Target locations were either congruent with scene semantics, incongruent with what would be expected from scene semantics, or random. Half of the subjects learned to search for the target in incongruent locations over repeated experimental sessions when it was optimal to do so. These results suggest that searchers can learn to prioritize recent experience over knowledge of scenes in a near-optimal fashion when it is beneficial to do so, provided the evidence from recent experience is learnable.
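A minimal sketch of the general idea, not the paper's ideal observer: recent experience (counts of where the target was found) is blended with a long-standing semantic prior via Dirichlet-style pseudo-counts, and search starts at the currently most probable location. The location labels, probabilities, and prior weight below are hypothetical.

```python
import numpy as np

# Hypothetical setup: three candidate locations, a long-standing semantic prior
# over where the target "should" be, and the incongruent spot where it actually is.
locations = ["drawer", "shelf", "microwave"]
semantic_prior = np.array([0.6, 0.35, 0.05])     # congruent long-standing beliefs
counts = np.zeros(3)                              # recent experience: times found at each spot
true_location = 2                                 # target is (incongruently) in the microwave

def posterior(prior_weight=10.0):
    """Dirichlet-multinomial blend of the semantic prior and recent experience.

    prior_weight is how many pseudo-observations the long-standing beliefs
    are worth relative to newly observed target locations.
    """
    alpha = prior_weight * semantic_prior + counts
    return alpha / alpha.sum()

first_look_correct = []
for trial in range(30):
    p = posterior()
    first_look = int(np.argmax(p))                # search the most probable location first
    first_look_correct.append(first_look == true_location)
    counts[true_location] += 1                    # the target is eventually found there

print("posterior after 30 trials:", np.round(posterior(), 3))
print("trials where the first look was already at the target:", sum(first_look_correct))
```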
Collapse
Affiliation(s)
- Gwendolyn L Rehrig
- Department of Psychology, University of California, Davis, CA, 95616, USA.
| | - Michelle Cheng
- School of Social Sciences, Nanyang Technological University, Singapore, 639798, Singapore
| | - Brian C McMahan
- Department of Computer Science, Rutgers University-New Brunswick, New Brunswick, USA
| | - Rahul Shome
- Department of Computer Science, Rice University, Houston, USA
| |
Collapse
|
28
|
Fan Z, Plotto A, Bai J, Whitaker VM. Volatiles Influencing Sensory Attributes and Bayesian Modeling of the Soluble Solids-Sweetness Relationship in Strawberry. FRONTIERS IN PLANT SCIENCE 2021; 12:640704. [PMID: 33815448 PMCID: PMC8010315 DOI: 10.3389/fpls.2021.640704] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/11/2020] [Accepted: 02/01/2021] [Indexed: 05/27/2023]
Abstract
Descriptive analysis via trained sensory panels has great power to facilitate flavor improvement in fresh fruits and vegetables. When paired with an understanding of fruit volatile organic compounds, descriptive analysis can help uncover the chemical drivers of sensory attributes. In the present study, 213 strawberry samples representing 56 cultivars and advanced selections were sampled over seven seasons and subjected to both sensory descriptive and chemical analyses. Principal component analysis and K-cluster analyses of sensory data highlighted three groups of strawberry samples, with one classified as superior with high sweetness and strawberry flavor and low sourness and green flavor. Partial least square models revealed 20 sweetness-enhancing volatile organic compounds and two sweetness-reducing volatiles, many of which overlap with previous consumer sensory studies. Volatiles modulating green, sour, astringent, overripe, woody, and strawberry flavors were also identified. The relationship between soluble solids content (SSC) and sweetness was modeled with Bayesian regression, generating probabilities for sweetness levels from varying levels of soluble solids. A hierarchical Bayesian model with month effects indicated that SSC is most correlated to sweetness toward the end of the fruiting season, making this the best period to make phenotypic selections for soluble solids. Comparing effects from genotypes, harvest months, and their interactions on sensory attributes revealed that sweetness, sourness, and firmness were largely controlled by genetics. These findings help formulate a paradigm for improvement of eating quality in which sensory analyses drive the targeting of chemicals important to consumer-desired attributes, which further drive the development of genetic tools for improvement of flavor.
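For readers unfamiliar with Bayesian regression, the sketch below fits a conjugate Gaussian linear model of sweetness on SSC and turns the posterior into a predictive probability that sweetness exceeds a threshold at a given SSC. The data are synthetic and the model is far simpler than the hierarchical, month-effect model used in the study.

```python
from math import erf, sqrt
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: soluble solids content (SSC, %) and panel sweetness scores.
ssc = rng.uniform(5.0, 11.0, size=60)
sweetness = 0.9 * ssc + rng.normal(0.0, 1.0, size=60)

# Conjugate Bayesian linear regression (intercept + slope) with known noise sd.
X = np.column_stack([np.ones_like(ssc), ssc])
noise_sd = 1.0
prior_cov = np.eye(2) * 10.0                      # weak zero-mean Gaussian prior on coefficients

post_cov = np.linalg.inv(np.linalg.inv(prior_cov) + X.T @ X / noise_sd**2)
post_mean = post_cov @ (X.T @ sweetness) / noise_sd**2

def p_sweet(ssc_value, threshold=8.0):
    """Predictive probability that sweetness exceeds a threshold at a given SSC."""
    x = np.array([1.0, ssc_value])
    mu = x @ post_mean
    var = noise_sd**2 + x @ post_cov @ x          # predictive variance
    return 0.5 * (1.0 - erf((threshold - mu) / sqrt(2.0 * var)))

print("posterior slope:", round(post_mean[1], 3))
print("P(sweetness > 8 | SSC = 9%):", round(p_sweet(9.0), 3))
```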
Collapse
Affiliation(s)
- Zhen Fan
- Horticultural Sciences Department, IFAS Gulf Coast Research and Education Center, University of Florida, Wimauma, FL, United States
| | - Anne Plotto
- Horticultural Research Laboratory, USDA-ARS, Fort Pierce, FL, United States
| | - Jinhe Bai
- Horticultural Research Laboratory, USDA-ARS, Fort Pierce, FL, United States
| | - Vance M. Whitaker
- Horticultural Sciences Department, IFAS Gulf Coast Research and Education Center, University of Florida, Wimauma, FL, United States
| |
Collapse
|
29
|
Dehaene GP, Coen-Cagli R, Pouget A. Investigating the representation of uncertainty in neuronal circuits. PLoS Comput Biol 2021; 17:e1008138. [PMID: 33577553 PMCID: PMC7880493 DOI: 10.1371/journal.pcbi.1008138] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2018] [Accepted: 07/09/2020] [Indexed: 11/24/2022] Open
Abstract
Skilled behavior often displays signatures of Bayesian inference. In order for the brain to implement the required computations, neuronal activity must carry accurate information about the uncertainty of sensory inputs. Two major approaches have been proposed to study neuronal representations of uncertainty. The first one, the Bayesian decoding approach, aims primarily at decoding the posterior probability distribution of the stimulus from population activity using Bayes’ rule, and indirectly yields uncertainty estimates as a by-product. The second one, which we call the correlational approach, searches for specific features of neuronal activity (such as tuning-curve width and maximum firing-rate) which correlate with uncertainty. To compare these two approaches, we derived a new normative model of sound source localization by Interaural Time Difference (ITD), that reproduces a wealth of behavioral and neural observations. We found that several features of neuronal activity correlated with uncertainty on average, but none provided an accurate estimate of uncertainty on a trial-by-trial basis, indicating that the correlational approach may not reliably identify which aspects of neuronal responses represent uncertainty. In contrast, the Bayesian decoding approach reveals that the activity pattern of the entire population was required to reconstruct the trial-to-trial posterior distribution with Bayes’ rule. These results suggest that uncertainty is unlikely to be represented in a single feature of neuronal activity, and highlight the importance of using a Bayesian decoding approach when exploring the neural basis of uncertainty. In order to optimize their behavior, animals must continuously represent the uncertainty associated with their beliefs. Understanding the neural code for this uncertainty is a pressing and critical issue in neuroscience. Following a long tradition, some studies have investigated this code by measuring how average statistics of neural responses (like the tuning curves) correlate with uncertainty as stimulus characteristics are varied. We show that this approach can be very misleading. An alternative consists in decoding the neuronal responses to recover the posterior distribution over the encoded sensory variables and using the variance of this distribution as the measure of uncertainty. We demonstrate that this decoding approach can indeed avoid the pitfalls of the traditional approach, while leading to more accurate estimates of uncertainty.
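The Bayesian decoding approach can be illustrated with a toy population of independent Poisson neurons: applying Bayes' rule to one trial's spike counts yields a full posterior whose width is the trial-by-trial uncertainty estimate. The tuning shapes, gains, and stimulus variable below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Grid over the stimulus variable (e.g., interaural time difference in ms).
s_grid = np.linspace(-1.0, 1.0, 201)

# Independent Poisson neurons with Gaussian tuning curves (a simplifying assumption).
prefs = np.linspace(-1.0, 1.0, 30)
gain, width, baseline = 20.0, 0.3, 1.0

def rates(s):
    return baseline + gain * np.exp(-0.5 * ((prefs - s) / width) ** 2)

true_s = 0.2
spikes = rng.poisson(rates(true_s))               # one trial of population activity

# Bayesian decoding: log-likelihood of each candidate stimulus under Poisson noise.
log_like = np.array([np.sum(spikes * np.log(rates(s)) - rates(s)) for s in s_grid])
posterior = np.exp(log_like - log_like.max())     # flat prior over the grid
posterior /= posterior.sum()

post_mean = np.sum(s_grid * posterior)
post_sd = np.sqrt(np.sum((s_grid - post_mean) ** 2 * posterior))
print(f"decoded estimate: {post_mean:.3f}, trial-by-trial uncertainty (sd): {post_sd:.3f}")
```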
Collapse
Affiliation(s)
- Guillaume P Dehaene
- University of Geneva, Département des neurosciences fondamentales, Geneva, Switzerland
| | - Ruben Coen-Cagli
- University of Geneva, Département des neurosciences fondamentales, Geneva, Switzerland; Albert Einstein College of Medicine, Department of Systems & Computational Biology and Department of Neuroscience, Bronx, New York, United States of America
| | - Alexandre Pouget
- University of Geneva, Département des neurosciences fondamentales, Geneva, Switzerland; Gatsby Computational Neuroscience Unit, London, United Kingdom
| |
Collapse
|
30
|
Li L, Rehr R, Bruns P, Gerkmann T, Röder B. A Survey on Probabilistic Models in Human Perception and Machines. Front Robot AI 2021; 7:85. [PMID: 33501252 PMCID: PMC7805657 DOI: 10.3389/frobt.2020.00085] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2019] [Accepted: 05/29/2020] [Indexed: 11/29/2022] Open
Abstract
Extracting information from noisy signals is of fundamental importance for both biological and artificial perceptual systems. To provide tractable solutions to this challenge, the fields of human perception and machine signal processing (SP) have developed powerful computational models, including Bayesian probabilistic models. However, little true integration between these fields exists in their applications of the probabilistic models for solving analogous problems, such as noise reduction, signal enhancement, and source separation. In this mini review, we briefly introduce and compare selective applications of probabilistic models in machine SP and human psychophysics. We focus on audio and audio-visual processing, using examples of speech enhancement, automatic speech recognition, audio-visual cue integration, source separation, and causal inference to illustrate the basic principles of the probabilistic approach. Our goal is to identify commonalities between probabilistic models addressing brain processes and those aiming at building intelligent machines. These commonalities could constitute the closest points for interdisciplinary convergence.
Collapse
Affiliation(s)
- Lux Li
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
| | - Robert Rehr
- Signal Processing (SP), Department of Informatics, University of Hamburg, Hamburg, Germany
| | - Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
| | - Timo Gerkmann
- Signal Processing (SP), Department of Informatics, University of Hamburg, Hamburg, Germany
| | - Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
| |
Collapse
|
31
|
Abstract
When facing ambiguous images, the brain switches between mutually exclusive interpretations, a phenomenon known as bistable perception. Despite years of research, a consensus on whether bistability is driven primarily by bottom-up or top-down mechanisms has not been achieved. Here, we adopted a Bayesian approach to reconcile these two theories. Fifty-five healthy participants were exposed to an adaptation of the Necker cube paradigm, in which we manipulated sensory evidence and prior knowledge. Manipulations of both sensory evidence and priors significantly affected the way participants perceived the Necker cube. However, we observed an interaction between the effect of the cue and the effect of the instructions, a finding that is incompatible with Bayes-optimal integration. In contrast, the data were well predicted by a circular inference model. In this model, ambiguous sensory evidence is systematically biased in the direction of current expectations, ultimately resulting in a bistable percept.
Collapse
|
32
|
The role of sensory uncertainty in simple contour integration. PLoS Comput Biol 2020; 16:e1006308. [PMID: 33253195 PMCID: PMC7728286 DOI: 10.1371/journal.pcbi.1006308] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2018] [Revised: 12/10/2020] [Accepted: 10/22/2020] [Indexed: 11/29/2022] Open
Abstract
Perceptual organization is the process of grouping scene elements into whole entities. A classic example is contour integration, in which separate line segments are perceived as continuous contours. Uncertainty in such grouping arises from scene ambiguity and sensory noise. Some classic Gestalt principles of contour integration, and more broadly, of perceptual organization, have been re-framed in terms of Bayesian inference, whereby the observer computes the probability that the whole entity is present. Previous studies that proposed a Bayesian interpretation of perceptual organization, however, have ignored sensory uncertainty, despite the fact that accounting for the current level of perceptual uncertainty is one of the main signatures of Bayesian decision making. Crucially, trial-by-trial manipulation of sensory uncertainty is a key test to whether humans perform near-optimal Bayesian inference in contour integration, as opposed to using some manifestly non-Bayesian heuristic. We distinguish between these hypotheses in a simplified form of contour integration, namely judging whether two line segments separated by an occluder are collinear. We manipulate sensory uncertainty by varying retinal eccentricity. A Bayes-optimal observer would take the level of sensory uncertainty into account—in a very specific way—in deciding whether a measured offset between the line segments is due to non-collinearity or to sensory noise. We find that people deviate slightly but systematically from Bayesian optimality, while still performing “probabilistic computation” in the sense that they take into account sensory uncertainty via a heuristic rule. Our work contributes to an understanding of the role of sensory uncertainty in higher-order perception. Our percept of the world is governed not only by the sensory information we have access to, but also by the way we interpret this information. When presented with a visual scene, our visual system undergoes a process of grouping visual elements together to form coherent entities so that we can interpret the scene more readily and meaningfully. For example, when looking at a pile of autumn leaves, one can still perceive and identify a whole leaf even when it is partially covered by another leaf. While Gestalt psychologists have long described perceptual organization with a set of qualitative laws, recent studies offered a statistically-optimal—Bayesian, in statistical jargon—interpretation of this process, whereby the observer chooses the scene configuration with the highest probability given the available sensory inputs. However, these studies drew their conclusions without considering a key actor in this kind of statistically-optimal computations, that is the role of sensory uncertainty. One can easily imagine that our decision on whether two contours belong to the same leaf or different leaves is likely going to change when we move from viewing the pile of leaves at a great distance (high sensory uncertainty), to viewing very closely (low sensory uncertainty). Our study examines whether and how people incorporate uncertainty into contour integration, an elementary form of perceptual organization, by varying sensory uncertainty from trial to trial in a simple contour integration task. We found that people indeed take into account sensory uncertainty, however in a way that subtly deviates from optimal behavior.
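A minimal sketch of the Bayes-optimal computation described here, under illustrative Gaussian assumptions: the observer weighs "same contour" against "different contours" for a measured offset, with the sensory noise level standing in for the effect of retinal eccentricity. All numbers are made up.

```python
import numpy as np

def p_collinear(measured_offset, sensory_sd, offset_sd=1.0, prior_collinear=0.5):
    """Posterior probability that two occluded segments are collinear.

    Under 'collinear', the true offset is 0 and the measurement is pure sensory
    noise; under 'not collinear', the true offset is drawn from a zero-mean
    Gaussian with sd offset_sd. sensory_sd would grow with retinal eccentricity.
    """
    var_same = sensory_sd ** 2
    var_diff = sensory_sd ** 2 + offset_sd ** 2
    like_same = np.exp(-0.5 * measured_offset**2 / var_same) / np.sqrt(2 * np.pi * var_same)
    like_diff = np.exp(-0.5 * measured_offset**2 / var_diff) / np.sqrt(2 * np.pi * var_diff)
    post = prior_collinear * like_same
    return post / (post + (1 - prior_collinear) * like_diff)

# The same measured offset is more readily attributed to noise at high eccentricity.
for sd in (0.2, 0.5, 1.0):
    print(f"sensory sd {sd:.1f}: P(collinear | offset = 0.6) = {p_collinear(0.6, sd):.3f}")
```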
Collapse
|
33
|
Ye R, Liu X. How the known reference weakens the visual oblique effect: a Bayesian account of cognitive improvement by cue influence. Sci Rep 2020; 10:20269. [PMID: 33219255 PMCID: PMC7680155 DOI: 10.1038/s41598-020-76911-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2018] [Accepted: 11/03/2020] [Indexed: 12/02/2022] Open
Abstract
This paper investigates the influence of a known cue on the oblique effect in orientation identification and explains how subjects integrate cue information to identify target orientations. We designed a psychophysical task in which subjects estimated target orientations in the presence of a known oriented reference line; control experiments without the reference were conducted for comparison. Within a Bayesian inference framework, a cue integration model is proposed to explain the perceptual improvement in the presence of the reference, and maximum likelihood estimates of the model parameters are obtained. In the presence of the reference, the variability and biases of identification are significantly reduced and the oblique effect in orientation identification is markedly weakened. Moreover, identification of orientations in the vicinity of the reference line is consistently biased away from it (reference repulsion). Comparing the model predictions with the experimental results, the Bayesian Least Squares estimator under Variable-Precision encoding (BLS_VP) provides the better description of the outcomes and captures the trade-off between bias and precision of identification. Our results are a useful step toward a better understanding of human visual perception in the context of known cues.
Collapse
Affiliation(s)
- Renyu Ye
- State Key Laboratory of Mechanics and Control of Mechanical Structures, Institute of Nano Science and Department of Mathematics, Nanjing University of Aeronautics and Astronautics, Nanjing, 210016, China
- School of Mathematics and Physics, Anqing Normal University, Anqing, 246133, China
| | - Xinsheng Liu
- State Key Laboratory of Mechanics and Control of Mechanical Structures, Institute of Nano Science and Department of Mathematics, Nanjing University of Aeronautics and Astronautics, Nanjing, 210016, China.
| |
Collapse
|
34
|
The misrepresentation of spatial uncertainty in visual search: Single- versus joint-distribution probability cues. Atten Percept Psychophys 2020; 83:603-623. [PMID: 33025465 DOI: 10.3758/s13414-020-02145-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/08/2020] [Indexed: 11/08/2022]
Abstract
The present study used information theory to quantify the extent to which different spatial cues conveyed the entropy associated with the identity and location of a visual search target. Single-distribution cues reflected the probability that the target would appear at one fixed location whereas joint-distribution cues reflected the probability that the target would appear at the location where another cue (arrow) pointed. The present study used a novel demand-selection paradigm to examine the extent to which individuals explicitly preferred one type of probability cue over the other. Although both cues conveyed equal entropy, the main results suggested representation of greater target entropy for joint- than for single-distribution cues based on a comparison between predicted and observed probability cue choices across four experiments. The present findings emphasize the importance of understanding how individuals represent basic information-theoretic quantities that underlie more complex decision-theoretic processes such as Bayesian and active inference.
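A small numerical check of the equal-entropy claim, with made-up probabilities: conditioning either on the single fixed high-probability location or on where the arrow points leaves the same entropy over target locations.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

n_locations = 4
p_valid = 0.8   # probability the target is at the cued location (illustrative value)

# Single-distribution cue: one fixed location holds p_valid, the rest share the remainder.
single = [p_valid] + [(1 - p_valid) / (n_locations - 1)] * (n_locations - 1)

# Joint-distribution cue: conditional on where the arrow points, the same distribution
# applies, so the conditional entropy of the target location is identical.
joint_given_arrow = single

print("H(target | single-distribution cue):", round(entropy(single), 3), "bits")
print("H(target | arrow direction):        ", round(entropy(joint_given_arrow), 3), "bits")
```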
Collapse
|
35
|
Hebart MN, Schuck NW. Current topics in Computational Cognitive Neuroscience. Neuropsychologia 2020; 147:107621. [PMID: 32898518 DOI: 10.1016/j.neuropsychologia.2020.107621] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Affiliation(s)
- Martin N Hebart
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, 04103, Leipzig, Germany.
| | - Nicolas W Schuck
- Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, 14195, Berlin, Germany; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, 14195, Berlin, Germany.
| |
Collapse
|
36
|
Rausch M, Zehetleitner M, Steinhauser M, Maier ME. Cognitive modelling reveals distinct electrophysiological markers of decision confidence and error monitoring. Neuroimage 2020; 218:116963. [DOI: 10.1016/j.neuroimage.2020.116963] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2020] [Revised: 05/05/2020] [Accepted: 05/14/2020] [Indexed: 12/29/2022] Open
|
37
|
Abstract
Hierarchical structure and compositionality imbue human language with unparalleled expressive power and set it apart from other perception–action systems. However, neither formal nor neurobiological models account for how these defining computational properties might arise in a physiological system. I attempt to reconcile hierarchy and compositionality with principles from cell assembly computation in neuroscience; the result is an emerging theory of how the brain could convert distributed perceptual representations into hierarchical structures across multiple timescales while representing interpretable incremental stages of (de)compositional meaning. The model's architecture—a multidimensional coordinate system based on neurophysiological models of sensory processing—proposes that a manifold of neural trajectories encodes sensory, motor, and abstract linguistic states. Gain modulation, including inhibition, tunes the path in the manifold in accordance with behavior and is how latent structure is inferred. As a consequence, predictive information about upcoming sensory input during production and comprehension is available without a separate operation. The proposed processing mechanism is synthesized from current models of neural entrainment to speech, concepts from systems neuroscience and category theory, and a symbolic-connectionist computational model that uses time and rhythm to structure information. I build on evidence from cognitive neuroscience and computational modeling that suggests a formal and mechanistic alignment between structure building and neural oscillations, and moves toward unifying basic insights from linguistics and psycholinguistics with the currency of neural computation.
Collapse
Affiliation(s)
- Andrea E. Martin
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, The Netherlands
| |
Collapse
|
38
|
Cai MB, Shvartsman M, Wu A, Zhang H, Zhu X. Incorporating structured assumptions with probabilistic graphical models in fMRI data analysis. Neuropsychologia 2020; 144:107500. [PMID: 32433952 PMCID: PMC7387580 DOI: 10.1016/j.neuropsychologia.2020.107500] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2019] [Revised: 05/09/2020] [Accepted: 05/15/2020] [Indexed: 01/27/2023]
Abstract
With the wide adoption of functional magnetic resonance imaging (fMRI) by cognitive neuroscience researchers, large volumes of brain imaging data have been accumulated in recent years. Aggregating these data to derive scientific insights often faces the challenge that fMRI data are high-dimensional, heterogeneous across people, and noisy. These challenges demand the development of computational tools that are tailored both for the neuroscience questions and for the properties of the data. We review a few recently developed algorithms in various domains of fMRI research: fMRI in naturalistic tasks, analyzing full-brain functional connectivity, pattern classification, inferring representational similarity and modeling structured residuals. These algorithms all tackle the challenges in fMRI similarly: they start by making clear statements of assumptions about neural data and existing domain knowledge, incorporate those assumptions and domain knowledge into probabilistic graphical models, and use those models to estimate properties of interest or latent structures in the data. Such approaches can avoid erroneous findings, reduce the impact of noise, better utilize known properties of the data, and better aggregate data across groups of subjects. With these successful cases, we advocate wider adoption of explicit model construction in cognitive neuroscience. Although we focus on fMRI, the principle illustrated here is generally applicable to brain data of other modalities.
Collapse
Affiliation(s)
- Ming Bo Cai
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Japan; Princeton Neuroscience Institute, Princeton University, United States.
| | | | - Anqi Wu
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, United States
| | - Hejia Zhang
- Department of Electrical Engineering, Princeton University, United States
| | - Xia Zhu
- Intel Corporation, United States
| |
Collapse
|
39
|
Kiryakova RK, Aston S, Beierholm UR, Nardini M. Bayesian transfer in a complex spatial localization task. J Vis 2020; 20:17. [PMID: 32579672 PMCID: PMC7416888 DOI: 10.1167/jov.20.6.17] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2019] [Accepted: 04/17/2020] [Indexed: 01/31/2023] Open
Abstract
Prior knowledge can help observers in various situations. Adults can simultaneously learn two location priors and integrate these with sensory information to locate hidden objects. Importantly, observers weight prior and sensory (likelihood) information differently depending on their respective reliabilities, in line with principles of Bayesian inference. Yet, there is limited evidence that observers actually perform Bayesian inference, rather than a heuristic, such as forming a look-up table. To distinguish these possibilities, we ask whether previously learned priors will be immediately integrated with a new, untrained likelihood. If observers use Bayesian principles, they should immediately put less weight on the new, less reliable, likelihood ("Bayesian transfer"). In an initial experiment, observers estimated the position of a hidden target, drawn from one of two distinct distributions, using sensory and prior information. The sensory cue consisted of dots drawn from a Gaussian distribution centered on the true location with either low, medium, or high variance; the latter introduced after block three of five to test for evidence of Bayesian transfer. Observers did not weight the cue (relative to the prior) significantly less in the high compared to medium variance condition, counter to Bayesian predictions. However, when explicitly informed of the different prior variabilities, observers placed less weight on the new high variance likelihood ("Bayesian transfer"), yet, substantially diverged from ideal. Much of this divergence can be captured by a model that weights sensory information, according only to internal noise in using the cue. These results emphasize the limits of Bayesian models in complex tasks.
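The reliability weighting at issue can be written in a few lines for the Gaussian case; because the weight depends only on the prior and likelihood variances, an observer using this rule would down-weight a new, noisier likelihood immediately, which is the "Bayesian transfer" being tested. The values below are illustrative.

```python
import numpy as np

def bayes_estimate(cue_centroid, prior_mean, cue_sd, prior_sd):
    """Posterior mean for a Gaussian prior combined with a Gaussian likelihood.

    The weight on the cue depends only on the two variances, so this rule
    predicts immediate down-weighting of a new, noisier cue without retraining.
    """
    w_cue = prior_sd**2 / (prior_sd**2 + cue_sd**2)
    return w_cue * cue_centroid + (1 - w_cue) * prior_mean, w_cue

prior_mean, prior_sd = 0.0, 2.0
for cue_sd in (0.5, 1.5, 3.0):                    # low, medium, and new high-variance likelihood
    est, w = bayes_estimate(cue_centroid=1.0, prior_mean=prior_mean,
                            cue_sd=cue_sd, prior_sd=prior_sd)
    print(f"cue sd {cue_sd:.1f}: weight on cue = {w:.2f}, estimate = {est:.2f}")
```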
Collapse
Affiliation(s)
| | - Stacey Aston
- Department of Psychology, Durham University, Durham, UK
| | | | - Marko Nardini
- Department of Psychology, Durham University, Durham, UK
| |
Collapse
|
40
|
Ma WJ. Bayesian Decision Models: A Primer. Neuron 2020; 104:164-175. [PMID: 31600512 DOI: 10.1016/j.neuron.2019.09.037] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2019] [Revised: 09/20/2019] [Accepted: 09/20/2019] [Indexed: 11/26/2022]
Abstract
To understand decision-making behavior in simple, controlled environments, Bayesian models are often useful. First, optimal behavior is always Bayesian. Second, even when behavior deviates from optimality, the Bayesian approach offers candidate models to account for suboptimalities. Third, a realist interpretation of Bayesian models opens the door to studying the neural representation of uncertainty. In this tutorial, we review the principles of Bayesian models of decision making and then focus on five case studies with exercises. We conclude with reflections and future directions.
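In the spirit of the tutorial, here is a toy Bayesian decision model: compute a posterior over a binary state from a noisy measurement, then choose the action with the highest expected utility. With asymmetric utilities the chosen action can differ from the most probable state, as the middle case below shows. All numbers are invented for illustration.

```python
import numpy as np

prior = np.array([0.7, 0.3])                      # P(state = 0), P(state = 1)

def likelihood(x, mu=(0.0, 1.0), sd=1.0):
    """Gaussian likelihood of measurement x under each state."""
    return np.exp(-0.5 * ((x - np.array(mu)) / sd) ** 2)

utility = np.array([[1.0, -4.0],                  # utility[action, state]
                    [-1.0, 2.0]])                 # missing state 1 is costly

def decide(x):
    post = prior * likelihood(x)
    post /= post.sum()
    expected_utility = utility @ post             # expected utility of each action
    return int(np.argmax(expected_utility)), post

for x in (-0.5, 0.5, 1.5):
    action, post = decide(x)
    print(f"x = {x:+.1f}: P(state=1) = {post[1]:.2f}, chosen action = {action}")
```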
Collapse
Affiliation(s)
- Wei Ji Ma
- Center for Neural Science and Department of Psychology, New York University, New York, NY, USA.
| |
Collapse
|
41
|
Chetverikov A, Campana G, Kristjánsson Á. Probabilistic rejection templates in visual working memory. Cognition 2020; 196:104075. [DOI: 10.1016/j.cognition.2019.104075] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2019] [Revised: 09/13/2019] [Accepted: 09/16/2019] [Indexed: 10/25/2022]
|
42
|
Fu D, Weber C, Yang G, Kerzel M, Nan W, Barros P, Wu H, Liu X, Wermter S. What Can Computational Models Learn From Human Selective Attention? A Review From an Audiovisual Unimodal and Crossmodal Perspective. Front Integr Neurosci 2020; 14:10. [PMID: 32174816 PMCID: PMC7056875 DOI: 10.3389/fnint.2020.00010] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2019] [Accepted: 02/11/2020] [Indexed: 11/13/2022] Open
Abstract
Selective attention plays an essential role in information acquisition and utilization from the environment. In the past 50 years, research on selective attention has been a central topic in cognitive science. Compared with unimodal studies, crossmodal studies are more complex but necessary to solve real-world challenges in both human experiments and computational modeling. Although an increasing number of findings on crossmodal selective attention have shed light on humans' behavioral patterns and neural underpinnings, a much better understanding is still necessary to yield the same benefit for intelligent computational agents. This article reviews studies of selective attention in unimodal visual and auditory and crossmodal audiovisual setups from the multidisciplinary perspectives of psychology and cognitive neuroscience, and evaluates different ways to simulate analogous mechanisms in computational models and robotics. We discuss the gaps between these fields in this interdisciplinary review and provide insights about how to use psychological findings and theories in artificial intelligence from different perspectives.
Collapse
Affiliation(s)
- Di Fu
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Department of Informatics, University of Hamburg, Hamburg, Germany
| | - Cornelius Weber
- Department of Informatics, University of Hamburg, Hamburg, Germany
| | - Guochun Yang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
| | - Matthias Kerzel
- Department of Informatics, University of Hamburg, Hamburg, Germany
| | - Weizhi Nan
- Department of Psychology, Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, Guangzhou, China
| | - Pablo Barros
- Department of Informatics, University of Hamburg, Hamburg, Germany
| | - Haiyan Wu
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
| | - Xun Liu
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
| | - Stefan Wermter
- Department of Informatics, University of Hamburg, Hamburg, Germany
| |
Collapse
|
43
|
Ho JT, Preller KH, Lenggenhager B. Neuropharmacological modulation of the aberrant bodily self through psychedelics. Neurosci Biobehav Rev 2020; 108:526-541. [DOI: 10.1016/j.neubiorev.2019.12.006] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2019] [Revised: 11/18/2019] [Accepted: 12/04/2019] [Indexed: 12/13/2022]
|
44
|
A neural basis of probabilistic computation in visual cortex. Nat Neurosci 2019; 23:122-129. [PMID: 31873286 DOI: 10.1038/s41593-019-0554-5] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2018] [Accepted: 11/06/2019] [Indexed: 11/08/2022]
Abstract
Bayesian models of behavior suggest that organisms represent uncertainty associated with sensory variables. However, the neural code of uncertainty remains elusive. A central hypothesis is that uncertainty is encoded in the population activity of cortical neurons in the form of likelihood functions. We tested this hypothesis by simultaneously recording population activity from primate visual cortex during a visual categorization task in which trial-to-trial uncertainty about stimulus orientation was relevant for the decision. We decoded the likelihood function from the trial-to-trial population activity and found that it predicted decisions better than a point estimate of orientation. This remained true when we conditioned on the true orientation, suggesting that internal fluctuations in neural activity drive behaviorally meaningful variations in the likelihood function. Our results establish the role of population-encoded likelihood functions in mediating behavior and provide a neural underpinning for Bayesian models of perception.
Collapse
|
45
|
Abstract
It has been widely asserted that humans have a "Bayesian brain." Surprisingly, however, this term has never been defined and appears to be used differently by different authors. I argue that Bayesian brain should be used to denote the realist view that brains are actual Bayesian machines and point out that there is currently no evidence for such a claim.
Collapse
Affiliation(s)
- Dobromir Rahnev
- School of Psychology, Georgia Institute of Technology, Atlanta, GA 30332.
| |
Collapse
|
46
|
Human confidence judgments reflect reliability-based hierarchical integration of contextual information. Nat Commun 2019; 10:5430. [PMID: 31780659 PMCID: PMC6882790 DOI: 10.1038/s41467-019-13472-z] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2018] [Accepted: 11/07/2019] [Indexed: 11/08/2022] Open
Abstract
Our immediate observations must be supplemented with contextual information to resolve ambiguities. However, the context is often ambiguous too, and thus it should be inferred itself to guide behavior. Here, we introduce a novel hierarchical task (airplane task) in which participants should infer a higher-level, contextual variable to inform probabilistic inference about a hidden dependent variable at a lower level. By controlling the reliability of past sensory evidence through varying the sample size of the observations, we find that humans estimate the reliability of the context and combine it with current sensory uncertainty to inform their confidence reports. Behavior closely follows inference by probabilistic message passing between latent variables across hierarchical state representations. Commonly reported inferential fallacies, such as sample size insensitivity, are not present, and neither did participants appear to rely on simple heuristics. Our results reveal uncertainty-sensitive integration of information at different hierarchical levels and temporal scales. Because our immediate observations are often ambiguous, we must use the context (prior beliefs) to guide inference, but the context may also be uncertain. Here, the authors show that humans can accurately estimate the reliability of the context and combine it with sensory uncertainty to form their decisions and estimate confidence.
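A rough two-level Gaussian sketch of the kind of hierarchical, reliability-sensitive inference described here (not the airplane task itself): the context is estimated from a sample of past evidence, its remaining uncertainty is propagated downward, and the posterior precision at the lower level plays the role of confidence. All distributions and parameters are assumptions.

```python
import numpy as np

def context_posterior(samples, obs_sd=1.0, prior_mean=0.0, prior_sd=5.0):
    """Posterior over the higher-level (context) mean given past samples.

    Larger sample sizes make the context estimate more reliable, which then
    shows up in downstream confidence.
    """
    like_prec = len(samples) / obs_sd**2
    prior_prec = 1.0 / prior_sd**2
    post_prec = prior_prec + like_prec
    post_mean = (prior_prec * prior_mean + like_prec * np.mean(samples)) / post_prec
    return post_mean, 1.0 / post_prec

def lower_level_posterior(current_obs, ctx_mean, ctx_var, coupling_sd=1.0, obs_sd=1.0):
    """Combine the (uncertain) context with the current observation."""
    pred_var = ctx_var + coupling_sd**2            # context uncertainty propagated downward
    w = pred_var / (pred_var + obs_sd**2)
    mean = w * current_obs + (1 - w) * ctx_mean
    var = pred_var * obs_sd**2 / (pred_var + obs_sd**2)
    return mean, var

rng = np.random.default_rng(4)
for n in (3, 30):                                  # small vs large sample of past evidence
    samples = rng.normal(2.0, 1.0, size=n)
    ctx_mean, ctx_var = context_posterior(samples)
    est, var = lower_level_posterior(current_obs=1.0, ctx_mean=ctx_mean, ctx_var=ctx_var)
    print(f"n = {n:2d}: lower-level estimate {est:.2f}, confidence ~ 1/var = {1/var:.2f}")
```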
Collapse
|
47
|
Kim S, Park J, Lee J. Effect of Prior Direction Expectation on the Accuracy and Precision of Smooth Pursuit Eye Movements. Front Syst Neurosci 2019; 13:71. [PMID: 32038182 PMCID: PMC6988807 DOI: 10.3389/fnsys.2019.00071] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2019] [Accepted: 11/11/2019] [Indexed: 12/23/2022] Open
Abstract
The integration of sensory and top-down cognitive signals to generate appropriate sensory-motor behaviors is an important issue in understanding the brain's information processing. Recent studies have demonstrated that the interplay between sensory and high-level signals in oculomotor behavior can be explained by Bayesian inference. Specifically, prior knowledge of motion speed introduces a bias in the speed of smooth pursuit eye movements. The other important prediction of Bayesian inference is variability reduction by prior expectation; however, there is insufficient evidence from oculomotor behavior to support this prediction. In the present study, we trained monkeys to switch their prior expectation about motion direction and independently controlled the strength of the motion stimulus. Under identical sensory stimulus conditions, we tested whether prior knowledge about motion direction reduced the variability of open-loop smooth pursuit eye movements. We observed a significant reduction when the prior expectation was strong, consistent with the prediction of Bayesian inference. Taking advantage of open-loop smooth pursuit, we investigated the temporal dynamics of the effect of the prior on pursuit direction bias and variability. This analysis demonstrated that the strength of the sensory evidence depended not only on the strength of the sensory stimulus but also on the time required for the pursuit system to form a neural sensory representation. Finally, we demonstrated that the changes in variability and directional bias induced by prior knowledge were quantitatively explained by a Bayesian observer model.
Collapse
Affiliation(s)
- Seolmin Kim
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, South Korea; Department of Biomedical Engineering, Sungkyunkwan University, Suwon, South Korea
| | - Jeongjun Park
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, South Korea; Department of Biomedical Engineering, Sungkyunkwan University, Suwon, South Korea
| | - Joonyeol Lee
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, South Korea; Department of Biomedical Engineering, Sungkyunkwan University, Suwon, South Korea
| |
Collapse
|
48
|
Musall S, Urai AE, Sussillo D, Churchland AK. Harnessing behavioral diversity to understand neural computations for cognition. Curr Opin Neurobiol 2019; 58:229-238. [PMID: 31670073 PMCID: PMC6931281 DOI: 10.1016/j.conb.2019.09.011] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2019] [Revised: 08/28/2019] [Accepted: 09/11/2019] [Indexed: 11/28/2022]
Abstract
With the increasing acquisition of large-scale neural recordings comes the challenge of inferring the computations they perform and understanding how these give rise to behavior. Here, we review emerging conceptual and technological advances that begin to address this challenge, garnering insights from both biological and artificial neural networks. We argue that neural data should be recorded during rich behavioral tasks, to model cognitive processes and estimate latent behavioral variables. Careful quantification of animal movements can also provide a more complete picture of how movements shape neural dynamics and reflect changes in brain state, such as arousal or stress. Artificial neural networks (ANNs) could serve as artificial model organisms to connect neural dynamics and rich behavioral data. ANNs have already begun to reveal how a wide range of different behaviors can be implemented, generating hypotheses about how observed neural activity might drive behavior and explaining diversity in behavioral strategies.
Collapse
Affiliation(s)
- Simon Musall
- Cold Spring Harbor Laboratory, Neuroscience, Cold Spring Harbor, NY, USA
| | - Anne E Urai
- Cold Spring Harbor Laboratory, Neuroscience, Cold Spring Harbor, NY, USA
| | - David Sussillo
- Google AI, Google, Inc., Mountain View, CA, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Stanford Neurosciences Institute, Stanford University, Stanford, CA, USA
| | - Anne K Churchland
- Cold Spring Harbor Laboratory, Neuroscience, Cold Spring Harbor, NY, USA.
| |
Collapse
|
49
|
Abstract
The general lines of Bayesian modeling (BM) in the study of perception are outlined here. The main thesis argued here is that BM works well only in the so-called secondary processes of perception, and in particular in cases of imperfect discriminability between stimuli, or when a judgment is required, or in cases of multistability. In cases of “primary processes,” on the other hand, it is often arbitrary and anyway superfluous, as with the laws of Gestalt. However, it is pointed out that in these latter cases, simpler and more well-established methodologies already exist, such as signal detection theory and individual choice theory. The frequent recourse to arbitrary values of a priori probabilities is also open to question.
Collapse
|
50
|
Abstract
Smooth pursuit eye movements maintain the line of sight on smoothly moving targets. Although often studied as a response to sensory motion, pursuit anticipates changes in motion trajectories, thus reducing harmful consequences due to sensorimotor processing delays. Evidence for predictive pursuit includes (a) anticipatory smooth eye movements (ASEM) in the direction of expected future target motion that can be evoked by perceptual cues or by memory for recent motion, (b) pursuit during periods of target occlusion, and (c) improved accuracy of pursuit with self-generated or biologically realistic target motions. Predictive pursuit has been linked to neural activity in the frontal cortex and in sensory motion areas. As behavioral and neural evidence for predictive pursuit grows and statistically based models augment or replace linear systems approaches, pursuit is being regarded less as a reaction to immediate sensory motion and more as a predictive response, with retinal motion serving as one of a number of contributing cues.
Collapse
Affiliation(s)
- Eileen Kowler
- Department of Psychology, Rutgers University, Piscataway, New Jersey 08854, USA
| | - Jason F Rubinstein
- Department of Psychology, Rutgers University, Piscataway, New Jersey 08854, USA
| | - Elio M Santos
- Department of Psychology, Rutgers University, Piscataway, New Jersey 08854, USA; Current affiliation: Department of Psychology, State University of New York, College at Oneonta, Oneonta, New York 13820, USA
| | - Jie Wang
- Department of Psychology, Rutgers University, Piscataway, New Jersey 08854, USA
| |
Collapse
|