1
Srivastava S, Wang WY, Eckstein MP. Emergent human-like covert attention in feedforward convolutional neural networks. Curr Biol 2024; 34:579-593.e12. [PMID: 38244541] [DOI: 10.1016/j.cub.2023.12.058]
Abstract
Covert attention allows the selection of locations or features of the visual scene without moving the eyes. Cues and contexts predictive of a target's location orient covert attention and improve perceptual performance. The performance benefits are widely attributed to theories of covert attention as a limited resource, zoom, spotlight, or weighting of visual information. However, such concepts are difficult to map to neuronal populations. We show that a feedforward convolutional neural network (CNN) trained on images to optimize target detection accuracy and with no explicit incorporation of an attention mechanism, a limited resource, or feedback connections learns to utilize cues and contexts in the three most prominent covert attention tasks (Posner cueing, set size effects in search, and contextual cueing) and predicts the cue/context influences on human accuracy. The CNN's cueing/context effects generalize across network training schemes, to peripheral and central pre-cues, discrimination tasks, and reaction time measures, and critically do not vary with reductions in network resources (size). The CNN shows comparable cueing/context effects to a model that optimally uses image information to make decisions (Bayesian ideal observer) but generalizes these effects to cue instances unseen during training. Together, the findings suggest that human-like behavioral signatures of covert attention in the three landmark paradigms might be an emergent property of task accuracy optimization in neuronal populations without positing limited attentional resources. The findings might explain recent behavioral results showing cueing and context effects across a variety of simple organisms with no neocortex, from archerfish to fruit flies.
Affiliation(s)
- Sudhanshu Srivastava
- Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara, Santa Barbara, CA 93106, USA; Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, CA 93106, USA.
- William Yang Wang
- Department of Computer Science, University of California, Santa Barbara, Santa Barbara, CA 93106, USA; Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, CA 93106, USA.
- Miguel P Eckstein
- Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara, Santa Barbara, CA 93106, USA; Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA 93106, USA; Department of Computer Science, University of California, Santa Barbara, Santa Barbara, CA 93106, USA; Department of Electrical and Computer Engineering, University of California, Santa Barbara, Santa Barbara, CA 93106, USA; Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, CA 93106, USA.
2
Fooken J, Baltaretu BR, Barany DA, Diaz G, Semrau JA, Singh T, Crawford JD. Perceptual-Cognitive Integration for Goal-Directed Action in Naturalistic Environments. J Neurosci 2023; 43:7511-7522. [PMID: 37940592] [PMCID: PMC10634571] [DOI: 10.1523/jneurosci.1373-23.2023]
Abstract
Real-world actions require one to simultaneously perceive, think, and act on the surrounding world, requiring the integration of (bottom-up) sensory information and (top-down) cognitive and motor signals. Studying these processes involves the intellectual challenge of cutting across traditional neuroscience silos, and the technical challenge of recording data in uncontrolled natural environments. However, recent advances in techniques, such as neuroimaging, virtual reality, and motion tracking, allow one to address these issues in naturalistic environments for both healthy participants and clinical populations. In this review, we survey six topics in which naturalistic approaches have advanced both our fundamental understanding of brain function and how neurologic deficits influence goal-directed, coordinated action in naturalistic environments. The first part conveys fundamental neuroscience mechanisms related to visuospatial coding for action, adaptive eye-hand coordination, and visuomotor integration for manual interception. The second part discusses applications of such knowledge to neurologic deficits, specifically, steering in the presence of cortical blindness, impact of stroke on visual-proprioceptive integration, and impact of visual search and working memory deficits. This translational approach-extending knowledge from lab to rehab-provides new insights into the complex interplay between perceptual, motor, and cognitive control in naturalistic tasks that are relevant for both basic and clinical research.
Affiliation(s)
- Jolande Fooken
- Centre for Neuroscience, Queen's University, Kingston, Ontario K7L3N6, Canada
- Bianca R Baltaretu
- Department of Psychology, Justus Liebig University, Giessen, 35394, Germany
- Deborah A Barany
- Department of Kinesiology, University of Georgia, and Augusta University/University of Georgia Medical Partnership, Athens, Georgia 30602
- Gabriel Diaz
- Center for Imaging Science, Rochester Institute of Technology, Rochester, New York 14623
- Jennifer A Semrau
- Department of Kinesiology and Applied Physiology, University of Delaware, Newark, Delaware 19713
- Tarkeshwar Singh
- Department of Kinesiology, Pennsylvania State University, University Park, Pennsylvania 16802
- J Douglas Crawford
- Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
3
Capshaw G, Brown AD, Peña JL, Carr CE, Christensen-Dalsgaard J, Tollin DJ, Womack MC, McCullagh EA. The continued importance of comparative auditory research to modern scientific discovery. Hear Res 2023; 433:108766. [PMID: 37084504] [PMCID: PMC10321136] [DOI: 10.1016/j.heares.2023.108766]
Abstract
A rich history of comparative research in the auditory field has afforded a synthetic view of sound information processing by ears and brains. Some organisms have proven to be powerful models for human hearing due to fundamental similarities (e.g., well-matched hearing ranges), while others feature intriguing differences (e.g., atympanic ears) that invite further study. Work across diverse "non-traditional" organisms, from small mammals to avians to amphibians and beyond, continues to propel auditory science forward, netting a variety of biomedical and technological advances along the way. In this brief review, limited primarily to tetrapod vertebrates, we discuss the continued importance of comparative studies in hearing research from the periphery to the central nervous system, with a focus on outstanding questions such as mechanisms for sound capture, peripheral and central processing of directional/spatial information, and non-canonical auditory processing, including efferent and hormonal effects.
Affiliation(s)
- Grace Capshaw
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA.
- Andrew D Brown
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA 98105, USA
- José L Peña
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY 10461, USA
- Catherine E Carr
- Department of Biology, University of Maryland, College Park, MD 20742, USA
- Daniel J Tollin
- Department of Physiology and Biophysics, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA; Department of Otolaryngology, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA
- Molly C Womack
- Department of Biology, Utah State University, Logan, UT 84322, USA.
- Elizabeth A McCullagh
- Department of Integrative Biology, Oklahoma State University, Stillwater, OK 74078, USA.
4
Bayesian prediction of psychophysical detection responses from spike activity in the rat sensorimotor cortex. J Comput Neurosci 2023; 51:207-222. [PMID: 36696073] [DOI: 10.1007/s10827-023-00844-0]
Abstract
Decoding of sensorimotor information is essential for brain-computer interfaces (BCIs) as well as in normally functioning organisms. In this study, Bayesian models were developed for the prediction of binary decisions of 10 awake freely-moving male/female rats based on neural activity in a vibrotactile yes/no detection task. The vibrotactile stimuli were 40-Hz sinusoidal displacements (amplitude: 200 µm, duration: 0.5 s) applied on the glabrous skin. The task was to depress the right lever for stimulus detection and the left lever for the stimulus-off condition. Spike activity was recorded from 16-channel microwire arrays implanted in the hindlimb representation of primary somatosensory cortex (S1), overlapping also with the associated representation in the primary motor cortex (M1). Single-/multi-unit average spike rate (Rd) within the stimulus analysis window was used as the predictor of the stimulus state and the behavioral response at each trial based on a Bayesian network model. Due to high neural and psychophysical response variability for each rat and also across subjects, mean Rd was not correlated with hit and false alarm rates. Despite the fluctuations in the neural data, the Bayesian model for each rat generated moderately good accuracy (0.60-0.90) and good class prediction scores (recall, precision, F1) and was also tested with subsets of data (e.g., regular vs. fast spike groups). It was generally observed that the models were better for rats with lower psychophysical performance (lower sensitivity index A'). This suggests that Bayesian inference and similar machine learning techniques may be especially helpful during the training phase of BCIs or for rehabilitation with neuroprostheses.
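The kind of decoder this abstract describes — predicting a binary behavioral response from the average spike rate Rd via Bayes' rule — can be sketched as a Gaussian naive-Bayes classifier on synthetic spike rates (all numbers below are illustrative stand-ins, not the recorded data or the paper's exact model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the recorded data: mean spike rate (Rd) within the
# analysis window on stimulus-absent (0) vs. stimulus-present (1) trials.
n_trials = 400
stimulus = rng.integers(0, 2, n_trials)      # 1 = vibrotactile stimulus on
rates = rng.normal(8 + 4 * stimulus, 3.0)    # Rd in spikes/s (illustrative)

# Gaussian naive Bayes: P(class | Rd) ∝ P(Rd | class) P(class)
def fit_gaussian_bayes(x, y):
    params = {}
    for c in (0, 1):
        params[c] = (x[y == c].mean(), x[y == c].std(ddof=1), (y == c).mean())
    return params

def predict(params, x):
    post = []
    for c in (0, 1):
        mu, sd, prior = params[c]
        likelihood = np.exp(-0.5 * ((x - mu) / sd) ** 2) / sd
        post.append(likelihood * prior)
    return (post[1] > post[0]).astype(int)

params = fit_gaussian_bayes(rates, stimulus)
acc = (predict(params, rates) == stimulus).mean()
print(f"decoding accuracy: {acc:.2f}")
```

With this synthetic separation between the two rate distributions, the decoder lands in the moderate-accuracy range the abstract reports, despite trial-to-trial rate variability.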
5
Inhibition of return as a foraging facilitator in visual search: Evidence from long-term training. Atten Percept Psychophys 2023; 85:88-98. [PMID: 36380146] [DOI: 10.3758/s13414-022-02605-0]
Abstract
Inhibition of return (IOR) discourages visual attention from returning to previously attended locations, and has been theorized as a mechanism to facilitate foraging in visual search by inhibitory tagging of inspected items. Previous studies using visual search and probe-detection tasks (i.e., the probe-following-search paradigm) found longer reaction times (RTs) for probes appearing at the searched locations than probes appearing at novel locations. This IOR effect was stronger in serial than parallel search, favoring the foraging facilitator hypothesis. However, evidence for this hypothesis was still lacking because no attempt was made to study how IOR would change when search efficiency gradually improves. The current study employed the probe-following-search paradigm and long-term training to examine how IOR varied following search efficiency improvements across training days. According to the foraging facilitator hypothesis, inhibitory tagging is an after-effect of attentional engagement. Therefore, when attentional engagement in a visual search task is reduced via long-term training, the strength of inhibitory tagging decreases, thus predicting a reduced IOR effect. Consistent with this prediction, two experiments consistently showed that IOR decreased while search efficiency improved through training, although IOR reached the floor more quickly than search efficiency. These findings support the notion that IOR facilitates search performance via stronger inhibitory tagging in more difficult visual search.
6
Lee JL, Denison R, Ma WJ. Challenging the fixed-criterion model of perceptual decision-making. Neurosci Conscious 2023; 2023:niad010. [PMID: 37089450] [PMCID: PMC10118309] [DOI: 10.1093/nc/niad010]
Abstract
Perceptual decision-making is often conceptualized as the process of comparing an internal decision variable to a categorical boundary or criterion. How the mind sets such a criterion has been studied from at least two perspectives. One idea is that the criterion is a fixed quantity. In work on subjective phenomenology, the notion of a fixed criterion has been proposed to explain a phenomenon called "subjective inflation"-a form of metacognitive mismatch in which observers overestimate the quality of their sensory representation in the periphery or at unattended locations. A contrasting view emerging from studies of perceptual decision-making is that the criterion adjusts to the level of sensory uncertainty and is thus sensitive to variations in attention. Here, we mathematically demonstrate that previous empirical findings supporting subjective inflation are consistent with either a fixed or a flexible decision criterion. We further lay out specific task properties that are necessary to make inferences about the flexibility of the criterion: (i) a clear mapping from decision variable space to stimulus feature space and (ii) an incentive for observers to adjust their decision criterion as uncertainty changes. Recent work satisfying these requirements has demonstrated that decision criteria flexibly adjust according to uncertainty. We conclude that the fixed-criterion model of subjective inflation is poorly tenable.
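Why an optimal criterion must move with uncertainty can be made concrete with standard signal detection theory (this is textbook SDT, not the paper's specific derivation). For yes/no detection with equal-variance Gaussian noise, signal mean mu, noise sd, and prior P(present) = p, the ideal measurement-space criterion is k = mu/2 + (sd²/mu)·ln((1−p)/p), which depends on sd whenever priors are unequal:

```python
import numpy as np

# Ideal-observer criterion for yes/no detection (equal-variance Gaussian SDT).
# Log likelihood ratio: (x*mu - mu**2/2) / sd**2; decide "yes" when it exceeds
# log((1 - p) / p), i.e. when x > mu/2 + (sd**2 / mu) * log((1 - p) / p).
def optimal_criterion(mu, sd, p):
    return mu / 2 + (sd ** 2 / mu) * np.log((1 - p) / p)

# With unequal priors the optimal criterion shifts as sd grows, so a fixed
# criterion cannot stay optimal when attention changes sensory uncertainty.
for sd in (0.5, 1.0, 2.0):
    print(sd, round(optimal_criterion(mu=1.0, sd=sd, p=0.25), 3))
```

With equal priors (p = 0.5) the log-prior term vanishes and the criterion sits at mu/2 regardless of sd, which is why task incentives to adjust the criterion are one of the requirements the authors identify.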
Affiliation(s)
- Jennifer Laura Lee
- Center for Neural Science and Department of Psychology, New York University, 4 Washington Pl, New York City, NY 10003, United States
- Rachel Denison
- Center for Neural Science and Department of Psychology, New York University, 4 Washington Pl, New York City, NY 10003, United States
- Department of Psychological & Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA 02139, United States
- Wei Ji Ma
- Center for Neural Science and Department of Psychology, New York University, 4 Washington Pl, New York City, NY 10003, United States
7
Bujia G, Sclar M, Vita S, Solovey G, Kamienkowski JE. Modeling Human Visual Search in Natural Scenes: A Combined Bayesian Searcher and Saliency Map Approach. Front Syst Neurosci 2022; 16:882315. [PMID: 35712044] [PMCID: PMC9197262] [DOI: 10.3389/fnsys.2022.882315]
Abstract
Finding objects is essential for almost any daily-life visual task. Saliency models have been useful to predict fixation locations in natural images during a free-exploring task. However, it is still challenging to predict the sequence of fixations during visual search. Bayesian observer models are particularly suited for this task because they represent visual search as an active sampling process. Nevertheless, how they adapt to natural images remains largely unexplored. Here, we propose a unified Bayesian model for visual search guided by saliency maps as prior information. We validated our model with a visual search experiment in natural scenes. We showed that, although state-of-the-art saliency models performed well in predicting the first two fixations in a visual search task (~90% of the performance achieved by humans), their performance degraded to chance afterward. Therefore, saliency maps alone could model bottom-up first impressions, but they were not enough to explain scanpaths when top-down task information was critical. In contrast, our model led to human-like performance and scanpaths, as revealed by: first, the agreement between targets found by the model and the humans on a trial-by-trial basis; and second, the scanpath similarity between the model and the humans, which makes the behavior of the model indistinguishable from that of humans. Altogether, the combination of deep neural network-based saliency models for image processing and a Bayesian framework for scanpath integration proves to be a powerful and flexible approach to model human behavior in natural scenarios.
Affiliation(s)
- Gaston Bujia
- Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación, Universidad de Buenos Aires – CONICET, Ciudad Autónoma de Buenos Aires, Argentina
- Instituto de Cálculo, Universidad de Buenos Aires – CONICET, Ciudad Autónoma de Buenos Aires, Argentina
- Melanie Sclar
- Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación, Universidad de Buenos Aires – CONICET, Ciudad Autónoma de Buenos Aires, Argentina
- Sebastian Vita
- Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación, Universidad de Buenos Aires – CONICET, Ciudad Autónoma de Buenos Aires, Argentina
- Guillermo Solovey
- Instituto de Cálculo, Universidad de Buenos Aires – CONICET, Ciudad Autónoma de Buenos Aires, Argentina
- Juan Esteban Kamienkowski
- Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación, Universidad de Buenos Aires – CONICET, Ciudad Autónoma de Buenos Aires, Argentina
- Maestría de Explotación de Datos y Descubrimiento del Conocimiento, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Ciudad Autónoma de Buenos Aires, Argentina
8
Zhu S, Lakshminarasimhan KJ, Arfaei N, Angelaki DE. Eye movements reveal spatiotemporal dynamics of visually-informed planning in navigation. eLife 2022; 11:e73097. [PMID: 35503099] [PMCID: PMC9135400] [DOI: 10.7554/elife.73097]
Abstract
Goal-oriented navigation is widely understood to depend upon internal maps. Although this may be the case in many settings, humans tend to rely on vision in complex, unfamiliar environments. To study the nature of gaze during visually-guided navigation, we tasked humans to navigate to transiently visible goals in virtual mazes of varying levels of difficulty, observing that they took near-optimal trajectories in all arenas. By analyzing participants' eye movements, we gained insights into how they performed visually-informed planning. The spatial distribution of gaze revealed that environmental complexity mediated a striking trade-off in the extent to which attention was directed towards two complementary aspects of the world model: the reward location and task-relevant transitions. The temporal evolution of gaze revealed rapid, sequential prospection of the future path, evocative of neural replay. These findings suggest that the spatiotemporal characteristics of gaze during navigation are significantly shaped by the unique cognitive computations underlying real-world, sequential decision making.
Affiliation(s)
- Seren Zhu
- Center for Neural Science, New York University, New York, United States
- Nastaran Arfaei
- Department of Psychology, New York University, New York, United States
- Dora E Angelaki
- Center for Neural Science, New York University, New York, United States
- Department of Mechanical and Aerospace Engineering, New York University, New York, United States
9
Abstract
Sensory data about most natural task-relevant variables are entangled with task-irrelevant nuisance variables. The neurons that encode these relevant signals typically constitute a nonlinear population code. Here we present a theoretical framework for quantifying how the brain uses or decodes its nonlinear information. Our theory obeys fundamental mathematical limitations on information content inherited from the sensory periphery, describing redundant codes when there are many more cortical neurons than primary sensory neurons. The theory predicts that if the brain uses its nonlinear population codes optimally, then more informative patterns should be more correlated with choices. More specifically, the theory predicts a simple, easily computed quantitative relationship between fluctuating neural activity and behavioral choices that reveals the decoding efficiency. This relationship holds for optimal feedforward networks of modest complexity, when experiments are performed under natural nuisance variation. We analyze recordings from primary visual cortex of monkeys discriminating the distribution from which oriented stimuli were drawn, and find these data are consistent with the hypothesis of near-optimal nonlinear decoding.
10
Abstract
What are the contents of working memory? In both behavioral and neural computational models, a working memory representation is typically described by a single number, namely, a point estimate of a stimulus. Here, we asked if people also maintain the uncertainty associated with a memory and if people use this uncertainty in subsequent decisions. We collected data in a two-condition orientation change detection task; while both conditions measured whether people used memory uncertainty, only one required maintaining it. For each condition, we compared an optimal Bayesian observer model, in which the observer uses an accurate representation of uncertainty in their decision, to one in which the observer does not. We find that this “Use Uncertainty” model fits better for all participants in both conditions. In the first condition, this result suggests that people use uncertainty optimally in a working memory task when that uncertainty information is available at the time of decision, confirming earlier results. Critically, the results of the second condition suggest that this uncertainty information was maintained in working memory. We test model variants and find that our conclusions do not depend on our assumptions about the observer's encoding process, inference process, or decision rule. Our results provide evidence that people have uncertainty that reflects their memory precision on an item-specific level, maintain this information over a working memory delay, and use it implicitly in a way consistent with an optimal observer. These results challenge existing computational models of working memory to update their frameworks to represent uncertainty.
Affiliation(s)
- Aspen H Yoo
- Department of Psychology, New York University, NY, USA; Center for Neural Science, New York University, NY, USA; Department of Psychology, University of California, Berkeley, CA, USA
- Luigi Acerbi
- Department of Psychology, New York University, NY, USA; Center for Neural Science, New York University, NY, USA; Department of Computer Science, University of Helsinki, Helsinki, Finland
- Wei Ji Ma
- Department of Psychology, New York University, NY, USA; Center for Neural Science, New York University, NY, USA
11
Bates CJ, Jacobs RA. Optimal attentional allocation in the presence of capacity constraints in uncued and cued visual search. J Vis 2021; 21:3. [PMID: 33944906] [PMCID: PMC8107488] [DOI: 10.1167/jov.21.5.3]
Abstract
The vision sciences literature contains a large diversity of experimental and theoretical approaches to the study of visual attention. We argue that this diversity arises, at least in part, from the field's inability to unify differing theoretical perspectives. In particular, the field has been hindered by a lack of a principled formal framework for simultaneously thinking about both optimal attentional processing and capacity-limited attentional processing, where capacity is limited in a general, task-independent manner. Here, we supply such a framework based on rate-distortion theory (RDT) and optimal lossy compression. Our approach defines Bayes-optimal performance when an upper limit on information processing rate is imposed. In this article, we compare Bayesian and RDT accounts in both uncued and cued visual search tasks. We start by highlighting a typical shortcoming of unlimited-capacity Bayesian models that is not shared by RDT models, namely, that they often overestimate task performance when information-processing demands are increased. Next, we reexamine data from two cued-search experiments that have previously been modeled as the result of unlimited-capacity Bayesian inference and demonstrate that they can just as easily be explained as the result of optimal lossy compression. To model cued visual search, we introduce the concept of a "conditional communication channel." This simple extension generalizes the lossy-compression framework such that it can, in principle, predict optimal attentional-shift behavior in any kind of perceptual task, even when inputs to the model are raw sensory data such as image pixels. To demonstrate this idea's viability, we compare our idealized model of cued search, which operates on a simplified abstraction of the stimulus, to a deep neural network version that performs approximately optimal lossy compression on the real (pixel-level) experimental stimuli.
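The rate-distortion idea the abstract describes — Bayes-optimal behavior under an upper limit on information-processing rate — can be illustrated with the standard Blahut–Arimoto iteration for optimal lossy compression (a toy one-dimensional stimulus with illustrative parameters; this is the generic RDT machinery, not the authors' task models):

```python
import numpy as np

# Blahut–Arimoto: for a fixed trade-off parameter beta, find the channel
# Q(x_hat | x) that optimally trades off information rate against distortion.
x = np.linspace(-1, 1, 21)                          # stimulus values
p = np.exp(-0.5 * (x / 0.5) ** 2)                   # source distribution
p /= p.sum()
d = (x[:, None] - x[None, :]) ** 2                  # squared-error distortion
beta = 20.0                                         # larger beta -> higher rate

q = np.full(x.size, 1 / x.size)                     # marginal over x_hat
for _ in range(200):
    Q = q[None, :] * np.exp(-beta * d)              # unnormalized Q(x_hat | x)
    Q /= Q.sum(axis=1, keepdims=True)
    q = p @ Q                                       # update the marginal

rate = (p[:, None] * Q * np.log2(Q / q[None, :])).sum()   # bits per stimulus
distortion = (p[:, None] * Q * d).sum()
print(round(rate, 2), round(distortion, 4))
```

Sweeping beta traces out the rate-distortion curve: the capacity limit caps the rate, and the channel at that rate defines the best achievable (lossily compressed) percept.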
Affiliation(s)
- Robert A Jacobs
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
12
Abstract
Visual search, the task of detecting or locating target items among distractor items in a visual scene, is an important function for animals and humans. Different theoretical accounts make differing predictions for the effects of distractor statistics. Here we use a task in which we parametrically vary distractor items, allowing for a simultaneously fine-grained and comprehensive study of distractor statistics. We found effects of target-distractor similarity, distractor variability, and an interaction between the two, although the effect of the interaction on performance differed from the one expected. To explain these findings, we constructed computational process models that make trial-by-trial predictions for behavior based on the stimulus presented. These models, including a Bayesian observer model, provided excellent accounts of both the qualitative and quantitative effects of distractor statistics, as well as of the effect of changing the statistics of the environment (in the form of distractors being drawn from a different distribution). We conclude with a broader discussion of the role of computational process models in the understanding of visual search.
Affiliation(s)
- Joshua Calder-Travis
- Department of Experimental Psychology, University of Oxford, Oxford, UK; Department of Psychology, New York University, New York, NY, USA
- Wei Ji Ma
- Department of Psychology, New York University, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA
13
Bayesian regression explains how human participants handle parameter uncertainty. PLoS Comput Biol 2020; 16:e1007886. [PMID: 32421708] [PMCID: PMC7259793] [DOI: 10.1371/journal.pcbi.1007886]
Abstract
Accumulating evidence indicates that the human brain copes with sensory uncertainty in accordance with Bayes’ rule. However, it is unknown how humans make predictions when the generative model of the task at hand is described by uncertain parameters. Here, we tested whether and how humans take parameter uncertainty into account in a regression task. Participants extrapolated a parabola from a limited number of noisy points, shown on a computer screen. The quadratic parameter was drawn from a bimodal prior distribution. We tested whether human observers take full advantage of the given information, including the likelihood of the quadratic parameter value given the observed points and the quadratic parameter’s prior distribution. We compared human performance with Bayesian regression, which is the (Bayes) optimal solution to this problem, and three sub-optimal models, which are simpler to compute. Our results show that, under our specific experimental conditions, humans behave in a way that is consistent with Bayesian regression. Moreover, our results support the hypothesis that humans generate responses in a manner consistent with probability matching rather than Bayesian decision theory.

How do humans make predictions when the critical factor that influences the quality of the prediction is hidden? Here, we address this question by conducting a simple psychophysical experiment in which participants had to extrapolate a parabola with an unknown quadratic parameter. We show that in this task, humans perform in a manner consistent with the mathematically optimal model, i.e., Bayesian regression.
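The computation the abstract describes can be sketched as grid-based Bayesian regression: combine a bimodal prior over the quadratic parameter with the likelihood of the observed noisy points, then either average over the posterior or sample from it (probability matching). All numbers below are illustrative, not those of the experiment:

```python
import numpy as np

rng = np.random.default_rng(1)

# Parabola y = a * x**2 observed at a few positions with Gaussian noise;
# the quadratic parameter a has a bimodal (two-Gaussian) prior.
a_true, noise_sd = 0.8, 0.5
x = np.linspace(-1, 1, 6)
y = a_true * x ** 2 + rng.normal(0, noise_sd, x.size)

a_grid = np.linspace(-3, 3, 601)
prior = 0.5 * np.exp(-0.5 * ((a_grid - 1) / 0.3) ** 2) \
      + 0.5 * np.exp(-0.5 * ((a_grid + 1) / 0.3) ** 2)

# Likelihood of each candidate a given the noisy points, then the posterior.
sq_err = ((y[None, :] - a_grid[:, None] * x[None, :] ** 2) ** 2).sum(axis=1)
post = prior * np.exp(-0.5 * sq_err / noise_sd ** 2)
post /= post.sum()

# Bayes-optimal extrapolation: posterior-mean a at a probe location x = 1.5.
a_hat = (a_grid * post).sum()
print(f"posterior mean a = {a_hat:.2f}, prediction at x=1.5: {a_hat * 1.5**2:.2f}")

# Probability matching: sample a from the posterior instead of using its mean.
a_sample = rng.choice(a_grid, p=post)
```

The contrast between using `a_hat` and using `a_sample` on each trial is, roughly, the contrast the authors draw between Bayesian decision theory and probability matching.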
14
Ma WJ. Bayesian Decision Models: A Primer. Neuron 2020; 104:164-175. [PMID: 31600512] [DOI: 10.1016/j.neuron.2019.09.037]
Abstract
To understand decision-making behavior in simple, controlled environments, Bayesian models are often useful. First, optimal behavior is always Bayesian. Second, even when behavior deviates from optimality, the Bayesian approach offers candidate models to account for suboptimalities. Third, a realist interpretation of Bayesian models opens the door to studying the neural representation of uncertainty. In this tutorial, we review the principles of Bayesian models of decision making and then focus on five case studies with exercises. We conclude with reflections and future directions.
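The core computation in such models — combining a Gaussian prior over a stimulus with a Gaussian likelihood from a noisy measurement into a precision-weighted posterior — fits in a few lines (illustrative numbers; this is the standard conjugate-Gaussian result, not a specific case study from the tutorial):

```python
import numpy as np

# Estimate a stimulus s from a noisy measurement x = s + noise,
# combining a Gaussian prior with a Gaussian likelihood.
prior_mu, prior_sd = 0.0, 2.0     # world statistics
noise_sd = 1.0                    # sensory noise
x = 1.5                           # observed measurement

# Conjugate update: posterior is Gaussian; the mean is a reliability-weighted
# average of the measurement and the prior mean.
w = prior_sd ** 2 / (prior_sd ** 2 + noise_sd ** 2)
post_mu = w * x + (1 - w) * prior_mu
post_sd = np.sqrt(1 / (1 / prior_sd ** 2 + 1 / noise_sd ** 2))

print(round(post_mu, 3), round(post_sd, 3))   # prints: 1.2 0.894
```

Under squared-error loss the optimal point estimate is `post_mu`; other loss functions (and suboptimal read-outs of the posterior) give the candidate models the tutorial uses to account for deviations from optimality.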
Affiliation(s)
- Wei Ji Ma
- Center for Neural Science and Department of Psychology, New York University, New York, NY, USA.
|
15
|
Chetverikov A, Campana G, Kristjánsson Á. Probabilistic rejection templates in visual working memory. Cognition 2020; 196:104075. [DOI: 10.1016/j.cognition.2019.104075] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2019] [Revised: 09/13/2019] [Accepted: 09/16/2019] [Indexed: 10/25/2022]
|
16
|
A neural basis of probabilistic computation in visual cortex. Nat Neurosci 2019; 23:122-129. [PMID: 31873286 DOI: 10.1038/s41593-019-0554-5] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2018] [Accepted: 11/06/2019] [Indexed: 11/08/2022]
Abstract
Bayesian models of behavior suggest that organisms represent uncertainty associated with sensory variables. However, the neural code of uncertainty remains elusive. A central hypothesis is that uncertainty is encoded in the population activity of cortical neurons in the form of likelihood functions. We tested this hypothesis by simultaneously recording population activity from primate visual cortex during a visual categorization task in which trial-to-trial uncertainty about stimulus orientation was relevant for the decision. We decoded the likelihood function from the trial-to-trial population activity and found that it predicted decisions better than a point estimate of orientation. This remained true when we conditioned on the true orientation, suggesting that internal fluctuations in neural activity drive behaviorally meaningful variations in the likelihood function. Our results establish the role of population-encoded likelihood functions in mediating behavior and provide a neural underpinning for Bayesian models of perception.
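The hypothesis being tested, that a likelihood function over the stimulus can be read out from population activity, is often illustrated with a toy Poisson population code. The tuning curves, gains, and cell count below are invented for illustration and are not the recorded V1 data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Population of neurons with Gaussian tuning for orientation (degrees).
prefs = np.linspace(0, 180, 30, endpoint=False)

def rates(theta, gain=20.0, width=20.0):
    d = (theta - prefs + 90) % 180 - 90  # circular distance, period 180
    return gain * np.exp(-d ** 2 / (2 * width ** 2)) + 1.0  # + baseline

# One trial: Poisson spike counts for a stimulus at 60 degrees.
theta_true = 60.0
r = rng.poisson(rates(theta_true))

# Decoded likelihood over orientation: L(theta) = p(r | theta) under
# Poisson noise (the log r! term is constant in theta and dropped).
theta_grid = np.linspace(0, 180, 361)
log_L = np.array([(r * np.log(rates(t)) - rates(t)).sum() for t in theta_grid])
L = np.exp(log_L - log_L.max())
L /= L.sum()

theta_hat = theta_grid[np.argmax(L)]  # point estimate; L's spread carries uncertainty
```

The study's key move is that the whole function `L`, not just `theta_hat`, predicts trial-to-trial decisions.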
|
17
|
Yiltiz H, Heeger DJ, Landy MS. Contingent adaptation in masking and surround suppression. Vision Res 2019; 166:72-80. [PMID: 31862645 DOI: 10.1016/j.visres.2019.11.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2019] [Revised: 11/06/2019] [Accepted: 11/08/2019] [Indexed: 10/25/2022]
Abstract
Adaptation is the process that changes a neuron's response based on recent inputs. In the traditional model, a neuron's state of adaptation depends on the recent input to that neuron alone, whereas in a recently introduced model (Hebbian normalization), adaptation depends on the structure of correlated neural firing. In particular, increased response products between pairs of neurons lead to increased mutual suppression. We test a psychophysical prediction of this model: adaptation should depend on 2nd-order statistics of input stimuli. That is, if two stimuli excite two distinct subpopulations of neurons, then presenting those stimuli simultaneously during adaptation should strengthen mutual suppression between those subpopulations. We confirm this prediction in two experiments. In the first, pairing two gratings synchronously during adaptation (i.e., a plaid) rather than asynchronously (interleaving the two gratings in time) leads to increased effectiveness of one pattern for masking the other. In the second, pairing the gratings in a center-surround configuration results in reduced apparent contrast for the central grating when paired with the same surround (as compared with a condition in which the central grating appears with a different surround at test than during adaptation). These results are consistent with the prediction that an increase in response covariance leads to greater mutual suppression between neurons. This effect is detectable both at threshold (masking) and well above threshold (apparent contrast).
Affiliation(s)
- Hörmet Yiltiz
- Department of Psychology, New York University, New York, NY, United States
- David J Heeger
- Department of Psychology, New York University, New York, NY, United States; Center for Neural Science, New York University, New York, NY, United States
- Michael S Landy
- Department of Psychology, New York University, New York, NY, United States; Center for Neural Science, New York University, New York, NY, United States.
|
18
|
|
19
|
Fang Y, Yu Z, Liu JK, Chen F. A unified neural circuit of causal inference and multisensory integration. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.05.067] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
|
20
|
Stengård E, van den Berg R. Imperfect Bayesian inference in visual perception. PLoS Comput Biol 2019; 15:e1006465. [PMID: 30998675 PMCID: PMC6472731 DOI: 10.1371/journal.pcbi.1006465] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2018] [Accepted: 03/08/2019] [Indexed: 11/24/2022] Open
Abstract
Optimal Bayesian models have been highly successful in describing human performance on perceptual decision-making tasks, such as cue combination and visual search. However, recent studies have argued that these models are often overly flexible and therefore lack explanatory power. Moreover, there are indications that neural computation is inherently imprecise, which makes it implausible that humans would perform optimally on any non-trivial task. Here, we reconsider human performance on a visual-search task by using an approach that constrains model flexibility and tests for computational imperfections. Subjects performed a target detection task in which targets and distractors were tilted ellipses with orientations drawn from Gaussian distributions with different means. We varied the amount of overlap between these distributions to create multiple levels of external uncertainty. We also varied the level of sensory noise, by testing subjects under both short and unlimited display times. On average, empirical performance, measured as d', fell 18.1% short of optimal performance. We found no evidence that the magnitude of this suboptimality was affected by the level of internal or external uncertainty. The data were well accounted for by a Bayesian model with imperfections in its computations. This "imperfect Bayesian" model convincingly outperformed the "flawless Bayesian" model as well as all ten heuristic models that we tested. These results suggest that perception is founded on Bayesian principles, but with suboptimalities in the implementation of these principles. The view of perception as imperfect Bayesian inference can provide a middle ground between traditional Bayesian and anti-Bayesian views.
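Performance here is summarized as d', the standard signal-detection sensitivity index. A minimal computation from hit and false-alarm rates looks like the following; the trial counts are made up for illustration and are not the paper's data:

```python
from statistics import NormalDist

# Illustrative trial counts for a target-detection task (not the paper's data).
hits, misses = 80, 20   # target-present trials
fas, crs = 15, 85       # target-absent trials

# d' = z(hit rate) - z(false-alarm rate), with z the inverse normal CDF.
z = NormalDist().inv_cdf
d_prime = z(hits / (hits + misses)) - z(fas / (fas + crs))
# An 18.1% shortfall from optimal would correspond to d_prime = 0.819 * optimal d'.
```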
Affiliation(s)
- Elina Stengård
- Department of Psychology, University of Uppsala, Uppsala, Sweden
|
21
|
Feature Distribution Learning (FDL): A New Method for Studying Visual Ensembles Perception with Priming of Attention Shifts. SPATIAL LEARNING AND ATTENTION GUIDANCE 2019. [DOI: 10.1007/7657_2019_20] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
|
22
|
Lau JSH, Brady TF. Ensemble statistics accessed through proxies: Range heuristic and dependence on low-level properties in variability discrimination. J Vis 2018; 18:3. [PMID: 30193345 PMCID: PMC6126932 DOI: 10.1167/18.9.3] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
People can quickly and accurately compute not only the mean size of a set of items but also the size variability of the items. However, it remains unknown how these statistics are estimated. Here we show that neither parallel access to all items nor random subsampling of just a few items is sufficient to explain participants' estimations of size variability. In three experiments, we had participants compare two arrays of circles with different variability in their sizes. In the first two experiments, we manipulated the congruency of the range and variance of the arrays. The arrays with congruent range and variability information were judged more accurately, indicating the use of range as a proxy for variability. Experiments 2B and 3 showed that people also are not invariant to low- or mid-level visual information in the arrays, as comparing arrays with different low-level characteristics (filled vs. outlined circles) led to systematic biases. Together, these experiments indicate that range and low- or mid-level properties are both utilized as proxies for variability discrimination, and people are flexible in adopting these strategies. These strategies are at odds with the claim of parallel extraction of ensemble statistics per se and random subsampling strategies previously proposed in the literature.
Affiliation(s)
- Jonas Sin-Heng Lau
- Department of Psychology, University of California, San Diego, La Jolla, CA, USA
- Timothy F Brady
- Department of Psychology, University of California, San Diego, La Jolla, CA, USA
|
23
|
Acerbi L, Dokka K, Angelaki DE, Ma WJ. Bayesian comparison of explicit and implicit causal inference strategies in multisensory heading perception. PLoS Comput Biol 2018; 14:e1006110. [PMID: 30052625 PMCID: PMC6063401 DOI: 10.1371/journal.pcbi.1006110] [Citation(s) in RCA: 50] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2017] [Accepted: 03/28/2018] [Indexed: 11/18/2022] Open
Abstract
The precision of multisensory perception improves when cues arising from the same cause are integrated, such as visual and vestibular heading cues for an observer moving through a stationary environment. In order to determine how the cues should be processed, the brain must infer the causal relationship underlying the multisensory cues. In heading perception, however, it is unclear whether observers follow the Bayesian strategy, a simpler non-Bayesian heuristic, or even perform causal inference at all. We developed an efficient and robust computational framework to perform Bayesian model comparison of causal inference strategies, which incorporates a number of alternative assumptions about the observers. With this framework, we investigated whether human observers' performance in an explicit cause attribution and an implicit heading discrimination task can be modeled as a causal inference process. In the explicit causal inference task, all subjects accounted for cue disparity when reporting judgments of common cause, although not necessarily all in a Bayesian fashion. By contrast, but in agreement with previous findings, data from the heading discrimination task alone could not rule out that several of the same observers were adopting a forced-fusion strategy, whereby cues are integrated regardless of disparity. Only when we combined evidence from both tasks were we able to rule out forced fusion in the heading discrimination task. Crucially, findings were robust across a number of variants of models and analyses. Our results demonstrate that our proposed computational framework allows researchers to ask complex questions within a rigorous Bayesian framework that accounts for parameter and model uncertainty.
Affiliation(s)
- Luigi Acerbi
- Center for Neural Science, New York University, New York, NY, United States of America
- Kalpana Dokka
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, United States of America
- Dora E. Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, United States of America
- Wei Ji Ma
- Center for Neural Science, New York University, New York, NY, United States of America
- Department of Psychology, New York University, New York, NY, United States of America
|
24
|
Cashdollar N, Ruhnau P, Weisz N, Hasson U. The Role of Working Memory in the Probabilistic Inference of Future Sensory Events. Cereb Cortex 2018; 27:2955-2969. [PMID: 27226445 DOI: 10.1093/cercor/bhw138] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
The ability to represent the emerging regularity of sensory information from the external environment has been thought to allow one to probabilistically infer future sensory occurrences and thus optimize behavior. However, the underlying neural implementation of this process is still not comprehensively understood. Through a convergence of behavioral and neurophysiological evidence, we establish that the probabilistic inference of future events is critically linked to people's ability to maintain the recent past in working memory. Magnetoencephalography recordings demonstrated that when visual stimuli occurring over an extended time series had a greater statistical regularity, individuals with higher working-memory capacity (WMC) displayed enhanced slow-wave neural oscillations in the θ frequency band (4-8 Hz) prior to, but not during stimulus appearance. This prestimulus neural activity was specifically linked to contexts where information could be anticipated and influenced the preferential sensory processing for this visual information after its appearance. A separate behavioral study demonstrated that this process intrinsically emerges during continuous perception and underpins a realistic advantage for efficient behavioral responses. In this way, WMC optimizes the anticipation of higher level semantic concepts expected to occur in the near future.
Affiliation(s)
- Nathan Cashdollar
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento 38060, Italy
- Philipp Ruhnau
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento 38060, Italy; Division of Physiological Psychology and Centre for Cognitive Neuroscience, University of Salzburg, Salzburg A-5020, Austria
- Nathan Weisz
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento 38060, Italy; Division of Physiological Psychology and Centre for Cognitive Neuroscience, University of Salzburg, Salzburg A-5020, Austria
- Uri Hasson
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento 38060, Italy
|
25
|
Nowakowska A, Clarke ADF, Hunt AR. Human visual search behaviour is far from ideal. Proc Biol Sci 2018; 284:rspb.2016.2767. [PMID: 28202816 DOI: 10.1098/rspb.2016.2767] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2016] [Accepted: 01/18/2017] [Indexed: 11/12/2022] Open
Abstract
Evolutionary pressures have made foraging behaviours highly efficient in many species. Eye movements during search present a useful instance of foraging behaviour in humans. We tested the efficiency of eye movements during search using homogeneous and heterogeneous arrays of line segments. The search target is visible in the periphery on the homogeneous array, but requires central vision to be detected on the heterogeneous array. For a compound search array that is heterogeneous on one side and homogeneous on the other, eye movements should be directed only to the heterogeneous side. Instead, participants made many fixations on the homogeneous side. By comparing search of compound arrays to an estimate of search performance based on uniform arrays, we isolate two contributions to search inefficiency. First, participants make superfluous fixations, sacrificing speed for a perceived (but not actual) gain in response certainty. Second, participants fixate the homogeneous side even more frequently than predicted by inefficient search of uniform arrays, suggesting they also fail to direct fixations to locations that yield the most new information.
Affiliation(s)
- Anna Nowakowska
- Department of Psychology, University of Aberdeen, Room T32, William Guild Building, King's College, Aberdeen, UK
- Alasdair D F Clarke
- Department of Psychology, University of Aberdeen, Room T32, William Guild Building, King's College, Aberdeen, UK; Department of Psychology, University of Essex, Colchester, UK
- Amelia R Hunt
- Department of Psychology, University of Aberdeen, Room T32, William Guild Building, King's College, Aberdeen, UK
|
26
|
Yashar A, Denison RN. Feature reliability determines specificity and transfer of perceptual learning in orientation search. PLoS Comput Biol 2017; 13:e1005882. [PMID: 29240813 PMCID: PMC5746251 DOI: 10.1371/journal.pcbi.1005882] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2017] [Revised: 12/28/2017] [Accepted: 11/16/2017] [Indexed: 11/24/2022] Open
Abstract
Training can modify the visual system to produce a substantial improvement on perceptual tasks and therefore has applications for treating visual deficits. Visual perceptual learning (VPL) is often specific to the trained feature, which gives insight into processes underlying brain plasticity, but limits VPL’s effectiveness in rehabilitation. Under what circumstances VPL transfers to untrained stimuli is poorly understood. Here we report a qualitatively new phenomenon: intrinsic variation in the representation of features determines the transfer of VPL. Orientations around cardinal are represented more reliably than orientations around oblique in V1, which has been linked to behavioral consequences such as visual search asymmetries. We studied VPL for visual search of near-cardinal or oblique targets among distractors of the other orientation while controlling for other display and task attributes, including task precision, task difficulty, and stimulus exposure. Learning was the same in all training conditions; however, transfer depended on the orientation of the target, with full transfer of learning from near-cardinal to oblique targets but not the reverse. To evaluate the idea that representational reliability was the key difference between the orientations in determining VPL transfer, we created a model that combined orientation-dependent reliability, improvement of reliability with learning, and an optimal search strategy. Modeling suggested that not only search asymmetries but also the asymmetric transfer of VPL depended on preexisting differences between the reliability of near-cardinal and oblique representations. Transfer asymmetries in model behavior also depended on having different learning rates for targets and distractors, such that greater learning for low-reliability distractors facilitated transfer. 
These findings suggest that training on sensory features with intrinsically low reliability may maximize the generalizability of learning in complex visual environments. Training can modify the visual system to produce improvements on perceptual tasks (visual perceptual learning), which is associated with adult brain plasticity. Visual perceptual learning has important clinical applications: it improves the vision of adults with visual deficits, e.g. amblyopia and cortical blindness, and even presbyopia (aging eye). A critical issue in visual perceptual learning is its specificity to the trained stimulus. Specificity gives insight into the processes underlying experience-dependent plasticity but can be an obstacle in the development of efficient rehabilitation protocols. Under what circumstances visual perceptual learning transfers to untrained stimuli is poorly understood. Here we report a qualitatively new phenomenon: specificity in visual search depends on intrinsic variations in the reliability of feature representations; e.g., vertically oriented lines are represented in V1 with greater reliability than tilted lines. Our data and computational model suggest that training on sensory features with intrinsically low reliability can maximize the generalizability of learning, particularly in complex natural environments in which task performance is limited by low-reliability features. Our study has possible implications for the development of efficient clinical applications of perceptual learning.
Affiliation(s)
- Amit Yashar
- Department of Psychology and Center for Neural Science, New York University, New York, New York, United States of America
- The School of Psychological Sciences, Tel Aviv University, Tel-Aviv, Israel
- Rachel N. Denison
- Department of Psychology and Center for Neural Science, New York University, New York, New York, United States of America
|
27
|
Devkar D, Wright AA, Ma WJ. Monkeys and humans take local uncertainty into account when localizing a change. J Vis 2017; 17:4. [PMID: 28877535 PMCID: PMC5588915 DOI: 10.1167/17.11.4] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022] Open
Abstract
Since sensory measurements are noisy, an observer is rarely certain about the identity of a stimulus. In visual perception tasks, observers generally take their uncertainty about a stimulus into account when doing so helps task performance. Whether the same holds in visual working memory tasks is largely unknown. Ten human and two monkey subjects localized a single change in orientation between a sample display containing three ellipses and a test display containing two ellipses. To manipulate uncertainty, we varied the reliability of orientation information by making each ellipse more or less elongated (two levels); reliability was independent across the stimuli. In both species, a variable-precision encoding model equipped with an "uncertainty-indifferent" decision rule, which uses only the noisy memories, fitted the data poorly. In both species, a much better fit was provided by a model in which the observer also takes the levels of reliability-driven uncertainty associated with the memories into account. In particular, a measured change in a low-reliability stimulus was given lower weight than the same change in a high-reliability stimulus. We did not find strong evidence that observers took reliability-independent variations in uncertainty into account. Our results illustrate the importance of studying the decision stage in comparison tasks and provide further evidence for evolutionary continuity of working memory systems between monkeys and humans.
Affiliation(s)
- Deepna Devkar
- Department of Neurobiology & Anatomy, University of Texas Medical School, Houston, TX, USA
- Anthony A Wright
- Department of Neurobiology & Anatomy, University of Texas Medical School, Houston, TX, USA
- Wei Ji Ma
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA; Present address: Center for Neural Science, New York University, New York, NY, USA
|
28
|
Affiliation(s)
- Miguel P. Eckstein
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, California 93106-9660
|
29
|
Efficient probabilistic inference in generic neural networks trained with non-probabilistic feedback. Nat Commun 2017; 8:138. [PMID: 28743932 PMCID: PMC5527101 DOI: 10.1038/s41467-017-00181-8] [Citation(s) in RCA: 31] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2016] [Accepted: 06/08/2017] [Indexed: 02/01/2023] Open
Abstract
Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey’s learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules. Behavioural tasks often require probability distributions to be inferred about task specific variables. Here, the authors demonstrate that generic neural networks can be trained using a simple error-based learning rule to perform such probabilistic computations efficiently without any need for task specific operations.
|
30
|
Reaction times in visual search can be explained by a simple model of neural synchronization. Neural Netw 2017; 87:1-7. [DOI: 10.1016/j.neunet.2016.12.003] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2016] [Revised: 12/01/2016] [Accepted: 12/02/2016] [Indexed: 11/22/2022]
|
31
|
Learning features in a complex and changing environment: A distribution-based framework for visual attention and vision in general. PROGRESS IN BRAIN RESEARCH 2017; 236:97-120. [DOI: 10.1016/bs.pbr.2017.07.001] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
|
32
|
Drugowitsch J, Wyart V, Devauchelle AD, Koechlin E. Computational Precision of Mental Inference as Critical Source of Human Choice Suboptimality. Neuron 2016; 92:1398-1411. [PMID: 27916454 DOI: 10.1016/j.neuron.2016.11.005] [Citation(s) in RCA: 83] [Impact Index Per Article: 10.4] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2016] [Revised: 08/04/2016] [Accepted: 10/28/2016] [Indexed: 11/21/2022]
Affiliation(s)
- Jan Drugowitsch
- Laboratoire de Neurosciences Cognitives, Inserm unit 960, Département d'Études Cognitives, École Normale Supérieure, PSL Research University, 75005 Paris, France; Département des Neurosciences Fondamentales, Université de Genève, CH-1211 Geneva, Switzerland; Department of Neurobiology, Harvard Medical School, Boston, MA 24615, USA.
- Valentin Wyart
- Laboratoire de Neurosciences Cognitives, Inserm unit 960, Département d'Études Cognitives, École Normale Supérieure, PSL Research University, 75005 Paris, France.
- Anne-Dominique Devauchelle
- Laboratoire de Neurosciences Cognitives, Inserm unit 960, Département d'Études Cognitives, École Normale Supérieure, PSL Research University, 75005 Paris, France
- Etienne Koechlin
- Laboratoire de Neurosciences Cognitives, Inserm unit 960, Département d'Études Cognitives, École Normale Supérieure, PSL Research University, 75005 Paris, France
|
33
|
|
34
|
Railton P. Moral Learning: Conceptual foundations and normative relevance. Cognition 2016; 167:172-190. [PMID: 27601269 DOI: 10.1016/j.cognition.2016.08.015] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2016] [Revised: 08/06/2016] [Accepted: 08/25/2016] [Indexed: 01/01/2023]
Abstract
What is distinctive about bringing a learning perspective to moral psychology? Part of the answer lies in the remarkable transformations that have taken place in learning theory over the past two decades, which have revealed how powerful experience-based learning can be in the acquisition of abstract causal and evaluative representations, including generative models capable of attuning perception, cognition, affect, and action to the physical and social environment. When conjoined with developments in neuroscience, these advances in learning theory permit a rethinking of fundamental questions about the acquisition of moral understanding and its role in the guidance of behavior. For example, recent research indicates that spatial learning and navigation involve the formation of non-perspectival as well as ego-centric models of the physical environment, and that spatial representations are combined with learned information about risk and reward to guide choice and potentiate further learning. Research on infants provides evidence that they form non-perspectival expected-value representations of agents and actions as well, which help them to navigate the human environment. Such representations can be formed by highly-general mental processes such as causal and empathic simulation, and thus afford a foundation for spontaneous moral learning and action that requires no innate moral faculty and can exhibit substantial autonomy with respect to community norms. If moral learning is indeed integral with the acquisition and updating of causal and evaluative models, this affords a new way of understanding well-known but seemingly puzzling patterns in intuitive moral judgment, including the notorious "trolley problems."
Affiliation(s)
- Peter Railton
- Department of Philosophy, University of Michigan, 2215 Angell Hall, 435 South State Street, Ann Arbor, MI 48109-1003, United States.
|
35
|
Shen S, Ma WJ. A detailed comparison of optimality and simplicity in perceptual decision making. Psychol Rev 2016; 123:452-80. [PMID: 27177259 DOI: 10.1037/rev0000028] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Two prominent ideas in the study of decision making have been that organisms behave near-optimally, and that they use simple heuristic rules. These principles might be operating in different types of tasks, but this possibility cannot be fully investigated without a direct, rigorous comparison within a single task. Such a comparison was lacking in most previous studies, because (a) the optimal decision rule was simple, (b) no simple suboptimal rules were considered, (c) it was unclear what was optimal, or (d) a simple rule could closely approximate the optimal rule. Here, we used a perceptual decision-making task in which the optimal decision rule is well-defined and complex, and makes qualitatively distinct predictions from many simple suboptimal rules. We find that all simple rules tested fail to describe human behavior, that the optimal rule accounts well for the data, and that several complex suboptimal rules are indistinguishable from the optimal one. Moreover, we found evidence that the optimal model is close to the true model: First, the better the trial-to-trial predictions of a suboptimal model agree with those of the optimal model, the better that suboptimal model fits; second, our estimate of the Kullback-Leibler divergence between the optimal model and the true model is not significantly different from zero. When observers receive no feedback, the optimal model still describes behavior best, suggesting that sensory uncertainty is implicitly represented and taken into account. Beyond the task and models studied here, our results have implications for best practices of model comparison.
Affiliation(s)
- Shan Shen
- Department of Neuroscience, Baylor College of Medicine
36
Bhardwaj M, van den Berg R, Ma WJ, Josić K. Do People Take Stimulus Correlations into Account in Visual Search? PLoS One 2016; 11:e0149402. [PMID: 26963498 PMCID: PMC4786311 DOI: 10.1371/journal.pone.0149402] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2015] [Accepted: 02/01/2016] [Indexed: 11/19/2022] Open
Abstract
In laboratory visual search experiments, distractors are often statistically independent of each other. However, stimuli in more naturalistic settings are often correlated and rarely independent. Here, we examine whether human observers take stimulus correlations into account in orientation target detection. We find that they do, although probably not optimally. In particular, it seems that low distractor correlations are overestimated. Our results might contribute to bridging the gap between artificial and natural visual search tasks.
Affiliation(s)
- Manisha Bhardwaj
- Department of Mathematics, University of Houston, Houston, Texas, United States of America
- Ronald van den Berg
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas, United States of America
- Department of Psychology, Uppsala University, Uppsala, Sweden
- Wei Ji Ma
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas, United States of America
- Center for Neural Science and Department of Psychology, New York University, New York, New York, United States of America
- Krešimir Josić
- Department of Mathematics, University of Houston, Houston, Texas, United States of America
- Department of Biology and Biochemistry, University of Houston, Houston, Texas, United States of America
37
Abstract
Behavioral and neural studies of selective attention have consistently demonstrated that explicit attentional cues to particular perceptual features profoundly alter perception and performance. The statistics of the sensory environment can also provide cues about what perceptual features to expect, but the extent to which these more implicit contextual cues impact perception and performance, as well as their relationship to explicit attentional cues, is not well understood. In this study, the explicit cues, or attentional prior probabilities, and the implicit cues, or contextual prior probabilities, associated with different acoustic frequencies in a detection task were simultaneously manipulated. Both attentional and contextual priors had similarly large but independent impacts on sound detectability, with evidence that listeners tracked and used contextual priors for a variety of sound classes (pure tones, harmonic complexes, and vowels). Further analyses showed that listeners updated their contextual priors rapidly and optimally, given the changing acoustic frequency statistics inherent in the paradigm. A Bayesian Observer model accounted for both attentional and contextual adaptations found with listeners. These results bolster the interpretation of perception as Bayesian inference, and suggest that some effects attributed to selective attention may be a special case of contextual prior integration along a feature axis.
38
Gekas N, Seitz AR, Seriès P. Expectations developed over multiple timescales facilitate visual search performance. J Vis 2015. [PMID: 26200891 DOI: 10.1167/15.9.10] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Our perception of the world is strongly influenced by our expectations, and a question of key importance is how the visual system develops and updates its expectations through interaction with the environment. We used a visual search task to investigate how expectations of different timescales (from the last few trials to hours to long-term statistics of natural scenes) interact to alter perception. We presented human observers with low-contrast white dots at 12 possible locations equally spaced on a circle, and we asked them to simultaneously identify the presence and location of the dots while manipulating their expectations by presenting stimuli at some locations more frequently than others. Our findings suggest that there are strong acuity differences between absolute target locations (e.g., horizontal vs. vertical) and preexisting long-term biases influencing observers' detection and localization performance, respectively. On top of these, subjects quickly learned about the stimulus distribution, which improved their detection performance but caused increased false alarms at the most frequently presented stimulus locations. Recent exposure to a stimulus resulted in significantly improved detection performance and significantly more false alarms but only at locations at which it was more probable that a stimulus would be presented. Our results can be modeled and understood within a Bayesian framework in terms of a near-optimal integration of sensory evidence with rapidly learned statistical priors, which are skewed toward the very recent history of trials and may help understanding the time scale of developing expectations at the neural level.
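The rapid, recency-weighted prior learning this abstract describes can be illustrated with a toy count model over stimulus locations. The decay constant, location probabilities, and trial count below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(2)
n_loc, decay = 12, 0.999           # 12 circle locations; per-trial forgetting (assumed)
true_p = np.ones(n_loc)
true_p[[2, 3]] = 6.0               # two locations are presented more often
true_p /= true_p.sum()

counts = np.ones(n_loc)            # uniform initial prior (Laplace smoothing)
for _ in range(2000):
    loc = rng.choice(n_loc, p=true_p)
    counts *= decay                # down-weight older trials
    counts[loc] += 1.0
prior = counts / counts.sum()      # learned prior, skewed toward frequent (and recent) locations
```

A decay closer to 1 averages over longer history; a smaller decay makes the prior track only the last few trials, as in the recency effects reported above.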
39
Bhardwaj M, Carroll S, Ma WJ, Josić K. Visual Decisions in the Presence of Measurement and Stimulus Correlations. Neural Comput 2015; 27:2318-53. [PMID: 26378875 DOI: 10.1162/neco_a_00778] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Humans and other animals base their decisions on noisy sensory input. Much work has been devoted to understanding the computations that underlie such decisions. The problem has been studied in a variety of tasks and with stimuli of differing complexity. However, how the statistical structure of stimuli, along with perceptual measurement noise, affects perceptual judgments is not well understood. Here we examine how correlations between the components of a stimulus (stimulus correlations), together with correlations in sensory noise, affect decision making. As an example, we consider the task of detecting the presence of a single or multiple targets among distractors. We assume that both the distractors and the observer's measurements of the stimuli are correlated. The computations of an optimal observer in this task are nontrivial yet can be analyzed and understood intuitively. We find that when distractors are strongly correlated, measurement correlations can have a strong impact on performance. When distractor correlations are weak, measurement correlations have little impact unless the number of stimuli is large. Correlations in neural responses to structured stimuli can therefore have a strong impact on perceptual judgments.
Affiliation(s)
- Manisha Bhardwaj
- Department of Mathematics, University of Houston, Houston, TX 77004, U.S.A.
- Samuel Carroll
- Department of Mathematics, University of Houston, Houston, TX 77004, U.S.A.
- Wei Ji Ma
- Center for Neural Science and Department of Psychology, New York University, NY 10003, U.S.A.
- Krešimir Josić
- Department of Biology and Biochemistry and Department of Mathematics, University of Houston, Houston, TX 77004, U.S.A.
40
Abstract
Organisms must act in the face of sensory, motor, and reward uncertainty stemming from a pandemonium of stochasticity and missing information. In many tasks, organisms can make better decisions if they have at their disposal a representation of the uncertainty associated with task-relevant variables. We formalize this problem using Bayesian decision theory and review recent behavioral and neural evidence that the brain may use knowledge of uncertainty, confidence, and probability.
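As a minimal illustration of the Bayesian decision-theoretic framing reviewed here, the sketch below combines a Gaussian prior with one noisy measurement; all numerical values are assumed for illustration.

```python
# Minimal sketch of Bayesian estimation under uncertainty (illustrative values).
mu0, sigma0 = 0.0, 2.0   # Gaussian prior over the stimulus
sigma_m = 1.0            # sensory (measurement) noise
x = 1.5                  # one noisy measurement

# The posterior is Gaussian; its mean is a reliability-weighted average
# of the measurement and the prior mean.
w = (1 / sigma_m**2) / (1 / sigma_m**2 + 1 / sigma0**2)
post_mean = w * x + (1 - w) * mu0                    # optimal estimate under squared-error loss
post_var = 1.0 / (1 / sigma_m**2 + 1 / sigma0**2)   # residual uncertainty
```

The posterior variance is what a confidence report could be based on: the noisier the measurement relative to the prior, the lower the weight `w` on the sensory evidence.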
Affiliation(s)
- Wei Ji Ma
- Center for Neural Science and Department of Psychology, New York University, New York, New York 10003;
41
Tajima S, Komine K. Saliency-based color accessibility. IEEE Trans Image Process 2015; 24:1115-1126. [PMID: 25608304 DOI: 10.1109/tip.2015.2393056] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Perception of color varies markedly between individuals because of differential expression of photopigments in retinal cones. However, it has been difficult to quantify individual cognitive variation in the perception of colored scenes and to predict its complex impacts on behavior. We developed a method for quantifying and visualizing the information loss and gain resulting from individual differences in spectral sensitivity, based on visual salience. We first modeled visual salience for color-deficient observers and found that the predicted losses and gains in local image salience derived from normal and color-blind models were correlated with subjective judgments of image saliency in psychophysical experiments; i.e., saliency loss predicted reduced image preference in color-deficient observers. Moreover, saliency-guided image manipulations sufficiently compensated for individual differences in saliency. This visual-saliency approach allows for quantification of the information extracted from complex visual scenes and can be used as image compensation to enhance visual accessibility for color-deficient individuals.
42
Bayesian accounts of covert selective attention: A tutorial review. Atten Percept Psychophys 2015; 77:1013-32. [DOI: 10.3758/s13414-014-0830-0] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2014] [Revised: 12/27/2014] [Accepted: 12/27/2014] [Indexed: 12/16/2022]
43
Ma WJ, Shen S, Dziugaite G, van den Berg R. Requiem for the max rule? Vision Res 2015; 116:179-93. [PMID: 25584425 DOI: 10.1016/j.visres.2014.12.019] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2014] [Revised: 12/29/2014] [Accepted: 12/30/2014] [Indexed: 11/19/2022]
Abstract
In tasks such as visual search and change detection, a key question is how observers integrate noisy measurements from multiple locations to make a decision. Decision rules proposed to model this process have fallen into two categories: Bayes-optimal (ideal observer) rules and ad-hoc rules. Among the latter, the maximum-of-outputs (max) rule has been the most prominent. Reviewing recent work and performing new model comparisons across a range of paradigms, we find that in all cases except for one, the optimal rule describes human data as well as or better than every max rule either previously proposed or newly introduced here. This casts doubt on the utility of the max rule for understanding perceptual decision-making.
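The contrast between the optimal (marginalization) rule and the max rule can be sketched in a toy target-detection simulation. The set size, noise level, target strength, and max-rule criterion below are illustrative assumptions, not the models fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma, s_t, T = 4, 1.0, 2.0, 20000   # set size, noise, target strength, trials (assumed)

def simulate(present):
    s = np.zeros((T, N))
    if present:
        s[np.arange(T), rng.integers(0, N, T)] = s_t   # target at one random location
    return s + rng.normal(0.0, sigma, (T, N))          # noisy measurements

def optimal_llr(x):
    d = (s_t * x - s_t**2 / 2) / sigma**2       # per-location log likelihood ratio
    return np.log(np.mean(np.exp(d), axis=1))   # marginalize over target location

x_p, x_a = simulate(True), simulate(False)
# Optimal rule: respond "present" when the marginal likelihood ratio exceeds 1
acc_opt = 0.5 * ((optimal_llr(x_p) > 0).mean() + (optimal_llr(x_a) <= 0).mean())
# Max rule: respond "present" when the largest measurement exceeds a criterion
c = 1.6
acc_max = 0.5 * ((x_p.max(axis=1) > c).mean() + (x_a.max(axis=1) <= c).mean())
```

In this easy toy task the max rule trails the optimal rule only slightly, which is exactly why careful model comparison across paradigms, as in the paper, is needed to tell them apart.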
Affiliation(s)
- Wei Ji Ma
- New York University, New York, NY, United States; Baylor College of Medicine, Houston, TX, United States.
- Shan Shen
- Baylor College of Medicine, Houston, TX, United States
- Gintare Dziugaite
- Baylor College of Medicine, Houston, TX, United States; University of Cambridge, Cambridge, UK
- Ronald van den Berg
- Baylor College of Medicine, Houston, TX, United States; University of Cambridge, Cambridge, UK
44
Neural representation of probabilities for Bayesian inference. J Comput Neurosci 2015; 38:315-23. [PMID: 25561333 DOI: 10.1007/s10827-014-0545-1] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2014] [Revised: 12/07/2014] [Accepted: 12/23/2014] [Indexed: 10/24/2022]
Abstract
Bayesian models are often successful in describing perception and behavior, but the neural representation of probabilities remains in question. There are several distinct proposals for the neural representation of probabilities, but they have not been directly compared in an example system. Here we consider three models: a non-uniform population code where the stimulus-driven activity and distribution of preferred stimuli in the population represent a likelihood function and a prior, respectively; the sampling hypothesis which proposes that the stimulus-driven activity over time represents a posterior probability and that the spontaneous activity represents a prior; and the class of models which propose that a population of neurons represents a posterior probability in a distributed code. It has been shown that the non-uniform population code model matches the representation of auditory space generated in the owl's external nucleus of the inferior colliculus (ICx). However, the alternative models have not been tested, nor have the three models been directly compared in any system. Here we tested the three models in the owl's ICx. We found that spontaneous firing rate and the average stimulus-driven response of these neurons were not consistent with predictions of the sampling hypothesis. We also found that neural activity in ICx under varying levels of sensory noise did not reflect a posterior probability. On the other hand, the responses of ICx neurons were consistent with the non-uniform population code model. We further show that Bayesian inference can be implemented in the non-uniform population code model using one spike per neuron when the population is large and is thus able to support the rapid inference that is necessary for sound localization.
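As a generic sketch of the likelihood computation underlying such population-code models (not the owl ICx data or the specific non-uniform model tested here), the code below decodes a stimulus from one volley of Poisson spike counts over a grid of candidate values. All tuning parameters are assumed.

```python
import numpy as np

rng = np.random.default_rng(3)
s_grid = np.linspace(-40.0, 40.0, 161)   # candidate stimulus values, deg (assumed)
prefs = rng.normal(0.0, 15.0, 80)        # non-uniform preferred stimuli, denser near 0
gain, width = 5.0, 10.0                  # tuning-curve peak rate and width (assumed)

def rates(s):
    # Gaussian tuning curves evaluated at stimulus s
    return gain * np.exp(-0.5 * ((s - prefs) / width) ** 2)

s_true = 10.0
r = rng.poisson(rates(s_true))           # one population response (spike counts)

# Poisson log likelihood of each candidate stimulus given the response
logL = np.array([np.sum(r * np.log(rates(s) + 1e-12) - rates(s)) for s in s_grid])
s_hat = s_grid[np.argmax(logL)]          # MAP estimate under a flat grid prior
```

In the non-uniform population code model, the uneven density of preferred stimuli plays the role of the prior; here a flat grid prior is used to keep the sketch minimal.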
45
Cutrone EK, Heeger DJ, Carrasco M. Attention enhances contrast appearance via increased input baseline of neural responses. J Vis 2014; 14:16. [PMID: 25549920 DOI: 10.1167/14.14.16] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023] Open
Abstract
Covert spatial attention increases the perceived contrast of stimuli at attended locations, presumably via enhancement of visual neural responses. However, the relation between perceived contrast and the underlying neural responses has not been characterized. In this study, we systematically varied stimulus contrast, using a two-alternative, forced-choice comparison task to probe the effect of attention on appearance across the contrast range. We modeled performance in the task as a function of underlying neural contrast-response functions. Fitting this model to the observed data revealed that an increased input baseline in the neural responses accounted for the enhancement of apparent contrast with spatial attention.
Affiliation(s)
- David J Heeger
- Department of Psychology, New York University, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA
- Marisa Carrasco
- Department of Psychology, New York University, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA
46
Hara Y, Gardner JL. Encoding of graded changes in spatial specificity of prior cues in human visual cortex. J Neurophysiol 2014; 112:2834-49. [PMID: 25185808 DOI: 10.1152/jn.00729.2013] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Prior information about the relevance of spatial locations can vary in specificity; a single location, a subset of locations, or all locations may be of potential importance. Using a contrast-discrimination task with four possible targets, we asked whether performance benefits are graded with the spatial specificity of a prior cue and whether we could quantitatively account for behavioral performance with cortical activity changes measured by blood oxygenation level-dependent (BOLD) imaging. Thus we changed the prior probability that each location contained the target from 100 to 50 to 25% by cueing in advance 1, 2, or 4 of the possible locations. We found that behavioral performance (discrimination thresholds) improved in a graded fashion with spatial specificity. However, concurrently measured cortical responses from retinotopically defined visual areas were not strictly graded; response magnitude decreased when all 4 locations were cued (25% prior probability) relative to the 100 and 50% prior probability conditions, but no significant difference in response magnitude was found between the 100 and 50% prior probability conditions for either cued or uncued locations. Also, although cueing locations increased responses relative to noncueing, this cue sensitivity was not graded with prior probability. Furthermore, contrast sensitivity of cortical responses, which could improve contrast discrimination performance, was not graded. Instead, an efficient-selection model showed that even if sensory responses do not strictly scale with prior probability, selection of sensory responses by weighting larger responses more can result in graded behavioral performance benefits with increasing spatial specificity of prior information.
Affiliation(s)
- Yuko Hara
- Laboratory for Human Systems Neuroscience, RIKEN Brain Science Institute, Wako, Saitama, Japan
- Justin L Gardner
- Laboratory for Human Systems Neuroscience, RIKEN Brain Science Institute, Wako, Saitama, Japan
47
Kent C, Guest D, Adelman JS, Lamberts K. Stochastic accumulation of feature information in perception and memory. Front Psychol 2014; 5:412. [PMID: 24860530 PMCID: PMC4026707 DOI: 10.3389/fpsyg.2014.00412] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2014] [Accepted: 04/19/2014] [Indexed: 11/26/2022] Open
Abstract
It is now well established that the time course of perceptual processing influences the first second or so of performance in a wide variety of cognitive tasks. Over the last 20 years, there has been a shift from modeling the speed at which a display is processed, to modeling the speed at which different features of the display are perceived and formalizing how this perceptual information is used in decision making. The first of these models (Lamberts, 1995) was implemented to fit the time course of performance in a speeded perceptual categorization task and assumed a simple stochastic accumulation of feature information. Subsequently, similar approaches have been used to model performance in a range of cognitive tasks including identification, absolute identification, perceptual matching, recognition, visual search, and word processing, again assuming a simple stochastic accumulation of feature information from both the stimulus and representations held in memory. These models are typically fit to data from signal-to-respond experiments whereby the effects of stimulus exposure duration on performance are examined, but response times (RTs) and RT distributions have also been modeled. In this article, we review this approach and explore the insights it has provided about the interplay between perceptual processing, memory retrieval, and decision making in a variety of tasks. In so doing, we highlight how such approaches can continue to usefully contribute to our understanding of cognition.
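The core assumption of these models, that feature information becomes available stochastically over exposure time, can be sketched in a few lines. The exponential sampling form follows Lamberts-style feature-sampling models; the rate and accuracy values are illustrative assumptions.

```python
import numpy as np

q = 2.0                                # feature-sampling rate, per second (assumed)
t = np.array([0.1, 0.3, 1.0])          # signal-to-respond lags, seconds (assumed)
p_feat = 1.0 - np.exp(-q * t)          # probability the feature has been sampled by time t
# accuracy: informed responding once the feature is in, guessing before
acc = 0.5 + (0.9 - 0.5) * p_feat
```

Fitting the sampling rate `q` to signal-to-respond data is how such models recover the time course of perceptual processing described above.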
Affiliation(s)
- Christopher Kent
- Bristol Tactile Action and Perception Lab, School of Experimental Psychology, University of Bristol, Bristol, UK
- Duncan Guest
- Division of Psychology, School of Social Sciences, Nottingham Trent University, Nottingham, UK
- Koen Lamberts
- Vice-Chancellor’s Department, University of York, York, UK
48
Ma WJ, Husain M, Bays PM. Changing concepts of working memory. Nat Neurosci 2014; 17:347-56. [PMID: 24569831 PMCID: PMC4159388 DOI: 10.1038/nn.3655] [Citation(s) in RCA: 596] [Impact Index Per Article: 59.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2013] [Accepted: 01/23/2014] [Indexed: 01/23/2023]
Abstract
Working memory is widely considered to be limited in capacity, holding a fixed, small number of items, such as Miller's 'magical number' seven or Cowan's four. It has recently been proposed that working memory might better be conceptualized as a limited resource that is distributed flexibly among all items to be maintained in memory. According to this view, the quality rather than the quantity of working memory representations determines performance. Here we consider behavioral and emerging neural evidence for this proposal.
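The continuous-resource view described here can be sketched as precision divided among memorized items. The power-law form and parameter values below are illustrative assumptions, not the review's fitted models.

```python
import numpy as np

rng = np.random.default_rng(1)
J1, alpha = 10.0, 1.0                  # precision for one item; decay exponent (assumed)

def recall_sd(set_size, trials=100000):
    # continuous-resource view: precision (1/variance) falls with memory load,
    # so recall errors for each item grow smoothly with set size
    J = J1 * set_size ** (-alpha)
    return rng.normal(0.0, np.sqrt(1.0 / J), trials).std()

sd1, sd4 = recall_sd(1), recall_sd(4)
# quality of representation degrades with load; no fixed item limit is invoked
```

Contrast this with a slot model, in which items beyond a fixed capacity would be recalled at chance while remembered items keep constant precision.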
Affiliation(s)
- Wei Ji Ma
- Center for Neural Science and Department of Psychology, New York University, New York, New York, USA
- Masud Husain
- Department of Experimental Psychology and Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Paul M Bays
- Institute of Neurology, University College London, London, UK
- Institute of Cognitive and Brain Sciences, University of California, Berkeley, Berkeley, California, USA
49
Kording KP. Bayesian statistics: relevant for the brain? Curr Opin Neurobiol 2014; 25:130-3. [PMID: 24463330 DOI: 10.1016/j.conb.2014.01.003] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2013] [Revised: 11/19/2013] [Accepted: 01/02/2014] [Indexed: 11/27/2022]
Abstract
Analyzing data from experiments involves variables that we neuroscientists are uncertain about. Efficiently calculating with such variables usually requires Bayesian statistics. As it is crucial when analyzing complex data, it seems natural that the brain would "use" such statistics to analyze data from the world. And indeed, recent studies in the areas of perception, action, and cognition suggest that Bayesian behavior is widespread, in many modalities and species. Consequently, many models have suggested that the brain is built on simple Bayesian principles. While the brain's code is probably not actually simple, I believe that Bayesian principles will facilitate the construction of faithful models of the brain.
50
Trial-to-trial, uncertainty-based adjustment of decision boundaries in visual categorization. Proc Natl Acad Sci U S A 2013; 110:20332-7. [PMID: 24272938 DOI: 10.1073/pnas.1219756110] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Categorization is a cornerstone of perception and cognition. Computationally, categorization amounts to applying decision boundaries in the space of stimulus features. We designed a visual categorization task in which optimal performance requires observers to incorporate trial-to-trial knowledge of the level of sensory uncertainty when setting their decision boundaries. We found that humans and monkeys did adjust their decision boundaries from trial to trial as the level of sensory noise varied, with some subjects performing near optimally. We constructed a neural network that implements uncertainty-based, near-optimal adjustment of decision boundaries. Divisive normalization emerges automatically as a key neural operation in this network. Our results offer an integrated computational and mechanistic framework for categorization under uncertainty.
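The uncertainty-dependent boundary adjustment can be derived in closed form for two zero-mean Gaussian stimulus categories of different widths. The category and noise parameters below are illustrative assumptions; the actual task parameters may differ.

```python
import numpy as np

sigma_A, sigma_B = 3.0, 12.0   # narrow vs wide stimulus categories, deg (assumed)

def boundary(sigma_m):
    # A measurement x ~ N(s, sigma_m^2) of a category sample is marginally
    # Gaussian with these variances; equal likelihood defines the boundary.
    vA, vB = sigma_A**2 + sigma_m**2, sigma_B**2 + sigma_m**2
    return np.sqrt(np.log(vB / vA) / (1.0 / vA - 1.0 / vB))

b_low, b_high = boundary(1.0), boundary(8.0)
# The noisier the measurement, the larger |x| must be before the wide
# category wins: an optimal observer widens the "narrow" response region.
```

This is the signature the paper tests: observers who set their boundary from trial to trial as a function of sensory noise behave like `boundary(sigma_m)`, whereas a fixed-criterion observer does not.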