1. Jeschke M, Zoeller AC, Drewing K. Humans flexibly use visual priors to optimize their haptic exploratory behavior. Sci Rep 2024; 14:14906. PMID: 38942980; PMCID: PMC11213930; DOI: 10.1038/s41598-024-65958-6.
Abstract
Humans can use prior information to optimize their haptic exploratory behavior. Here, we investigated how visual priors are used, which mechanisms enable their use, and how their use is affected by information quality. Participants explored different grating textures and discriminated their spatial frequency. Visual priors on texture orientation were given on each trial, with qualities randomly varying from high to no informational value. Adjustments of the initial exploratory movement direction orthogonal to the textures' orientation served as an indicator of prior usage. Participants indeed used visual priors, the more so the higher the priors' quality (Experiment 1). Higher task demands did not increase the direct usage of visual priors (Experiment 2), but possibly fostered the establishment of adjustment behavior. In Experiment 3, we decreased the proportion of high-quality priors presented during the session, thereby reducing the contingency between high-quality priors and haptic information. As a consequence, even priors of high quality ceased to evoke movement adjustments. We conclude that the establishment of adjustment behavior results from rather implicit contingency learning. Overall, it became evident that humans can autonomously learn to use rather abstract visual priors to optimize haptic exploration, with the learning process and direct usage substantially depending on the priors' quality.
Affiliation(s)
- Michaela Jeschke: Experimental Psychology, Justus-Liebig University, 35390 Gießen, Germany
- Aaron C Zoeller: Experimental Psychology, Justus-Liebig University, 35390 Gießen, Germany
- Knut Drewing: Experimental Psychology, Justus-Liebig University, 35390 Gießen, Germany
2. Cone JJ, Mitchell AO, Parker RK, Maunsell JHR. Stimulus-dependent differences in cortical versus subcortical contributions to visual detection in mice. Curr Biol 2024; 34:1940-1952.e5. PMID: 38640924; PMCID: PMC11080572; DOI: 10.1016/j.cub.2024.03.061.
Abstract
The primary visual cortex (V1) and the superior colliculus (SC) both occupy stations early in the processing of visual information. They have long been thought to perform distinct functions, with the V1 supporting the perception of visual features and the SC regulating orienting to visual inputs. However, growing evidence suggests that the SC supports the perception of many of the same visual features traditionally associated with the V1. To distinguish V1 and SC contributions to visual processing, it is critical to determine whether both areas causally contribute to the detection of specific visual stimuli. Here, mice reported changes in visual contrast or luminance near their perceptual threshold while white noise patterns of optogenetic stimulation were delivered to V1 or SC inhibitory neurons. We then performed a reverse correlation analysis on the optogenetic stimuli to estimate a neuronal-behavioral kernel (NBK), a moment-to-moment estimate of the impact of V1 or SC inhibition on stimulus detection. We show that the earliest moments of stimulus-evoked activity in the SC are critical for the detection of both luminance and contrast changes. Strikingly, there was a robust stimulus-aligned modulation in the V1 contrast-detection NBK but no sign of a comparable modulation for luminance detection. The data suggest that behavioral detection of visual contrast depends on both V1 and SC spiking, whereas mice preferentially use SC activity to detect changes in luminance. Electrophysiological recordings showed that neurons in both the SC and V1 responded strongly to both visual stimulus types, while the reverse correlation analysis reveals when these neuronal signals actually contribute to visually guided behaviors.
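The reverse correlation step described here reduces to a simple computation: average the optogenetic noise delivered on each trial, split by behavioral outcome, and take the difference as the kernel. A minimal sketch of that idea (function name, array shapes, and sign convention are illustrative assumptions, not the authors' code):

```python
import numpy as np

def neuronal_behavioral_kernel(opto_stimuli, outcomes):
    """Estimate a neuronal-behavioral kernel (NBK) by reverse correlation.

    opto_stimuli : (n_trials, n_timebins) array of white-noise optogenetic
                   power delivered on each trial, aligned to stimulus onset.
    outcomes     : (n_trials,) boolean array, True = stimulus detected (hit).

    The kernel is the mean optogenetic stimulus preceding misses minus the
    mean preceding hits: time bins where inhibition impaired detection.
    """
    opto_stimuli = np.asarray(opto_stimuli, dtype=float)
    outcomes = np.asarray(outcomes, dtype=bool)
    miss_mean = opto_stimuli[~outcomes].mean(axis=0)
    hit_mean = opto_stimuli[outcomes].mean(axis=0)
    return miss_mean - hit_mean
```

With enough trials, time bins where inhibitory activation systematically preceded misses stand out as positive deflections in the kernel, which is the moment-to-moment readout the abstract refers to.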
Affiliation(s)
- Jackson J Cone: Department of Neurobiology and Neuroscience Institute, University of Chicago, 5812 S. Ellis Ave. MC 0912, Suite P-400, Chicago, IL 60637, USA
- Autumn O Mitchell: Department of Neurobiology and Neuroscience Institute, University of Chicago, Chicago, IL 60637, USA
- Rachel K Parker: Department of Neurobiology and Neuroscience Institute, University of Chicago, Chicago, IL 60637, USA
- John H R Maunsell: Department of Neurobiology and Neuroscience Institute, University of Chicago, Chicago, IL 60637, USA
3. Diaw MD, Papelier S, Durand-Salmon A, Felblinger J, Oster J. A Human-Centered AI Framework for Efficient Labelling of ECGs From Drug Safety Trials. IEEE Trans Biomed Eng 2024; 71:1697-1704. PMID: 38157467; DOI: 10.1109/tbme.2023.3348329.
Abstract
Drug safety trials require substantial ECG labelling; in thorough QT studies, for example, the QT interval must be measured, as its prolongation is a biomarker of proarrhythmic risk. The traditional method of manually measuring the QT interval is time-consuming and error-prone. Studies have demonstrated the potential of deep learning (DL)-based methods to automate this task, but expert validation of these computerized measurements remains of paramount importance, particularly for abnormal ECG recordings. In this paper, we propose a highly automated framework that combines such a DL-based QT estimator with human expertise. The framework consists of three key components: (1) automated QT measurement with uncertainty quantification, (2) expert review of a small subset of DL-based measurements, mostly those with high model uncertainty, and (3) recalibration of the unreviewed measurements based on the expert-validated data. We assess its effectiveness on three drug safety trials and show that it can significantly reduce the effort required for ECG labelling (in our experiments, only 10% of the data were reviewed per trial) while maintaining high levels of QT accuracy. Our study thus demonstrates the possibility of productive human-machine collaboration in ECG analysis without any compromise on the reliability of subsequent clinical interpretations.
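The three components map naturally onto a triage loop: send the most uncertain automated measurements to the expert, then fit a recalibration of the remainder on the reviewed pairs. A minimal sketch under assumed data structures (this is not the authors' implementation; in particular, the linear recalibration and the exact review rule are illustrative, and fitting only on the most-uncertain subset is a simplification):

```python
import numpy as np

def triage_and_recalibrate(qt_pred, qt_uncertainty, expert_review, review_frac=0.10):
    """Review the most uncertain fraction of automated QT measurements,
    then linearly recalibrate the rest against the expert-validated pairs.

    qt_pred        : (n,) array of model QT estimates (ms)
    qt_uncertainty : (n,) array of per-estimate model uncertainty
    expert_review  : callable mapping a record index to the expert QT value
    review_frac    : fraction of records sent to the expert (10% here)
    """
    n = len(qt_pred)
    n_review = max(2, int(round(review_frac * n)))
    order = np.argsort(qt_uncertainty)[::-1]        # most uncertain first
    reviewed = order[:n_review]
    expert_vals = np.array([expert_review(i) for i in reviewed])
    # Least-squares linear recalibration fitted on the reviewed pairs
    slope, intercept = np.polyfit(qt_pred[reviewed], expert_vals, 1)
    qt_final = slope * qt_pred + intercept
    qt_final[reviewed] = expert_vals                # keep expert values verbatim
    return qt_final, reviewed
```

The design choice this illustrates is that expert effort is spent where the model is least sure, while the reviewed pairs double as calibration data for everything left unreviewed.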
4. Tiurina NA, Markov YA, Whitney D, Pascucci D. The functional role of spatial anisotropies in ensemble perception. BMC Biol 2024; 22:28. PMID: 38317216; PMCID: PMC10845794; DOI: 10.1186/s12915-024-01822-3.
Abstract
BACKGROUND: The human brain can rapidly represent sets of similar stimuli by their ensemble summary statistics, like the average orientation or size. Classic models assume that ensemble statistics are computed by integrating all elements with equal weight. Challenging this view, here, we show that ensemble statistics are estimated by combining parafoveal and foveal statistics in proportion to their reliability. In a series of experiments, observers reproduced the average orientation of an ensemble of stimuli under varying levels of visual uncertainty.
RESULTS: Ensemble statistics were affected by multiple spatial biases, in particular, a strong and persistent bias towards the center of the visual field. This bias, evident in the majority of subjects and in all experiments, scaled with uncertainty: the higher the uncertainty in the ensemble statistics, the larger the bias towards the element shown at the fovea.
CONCLUSION: Our findings indicate that ensemble perception cannot be explained by simple uniform pooling. The visual system weights information anisotropically from both the parafovea and the fovea, taking the intrinsic spatial anisotropies of vision into account to compensate for visual uncertainty.
Affiliation(s)
- Natalia A Tiurina: Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland; Department of Psychology, Technische Universität Dresden, Dresden, Germany
- Yuri A Markov: Laboratory of Psychophysics, Brain Mind Institute, EPFL, Lausanne, Switzerland; Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- David Whitney: Vision Science Graduate Group, Department of Psychology, and Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, USA
- David Pascucci: Laboratory of Psychophysics, Brain Mind Institute, EPFL, Lausanne, Switzerland
5. Nikbakht N. More Than the Sum of Its Parts: Visual-Tactile Integration in the Behaving Rat. Adv Exp Med Biol 2024; 1437:37-58. PMID: 38270852; DOI: 10.1007/978-981-99-7611-9_3.
Abstract
We experience the world by constantly integrating cues from multiple modalities to form unified sensory percepts. Once familiar with the multimodal properties of an object, we can recognize it regardless of the modality involved. In this chapter, we will examine the case of a visual-tactile orientation categorization experiment in rats. We will explore the involvement of the cerebral cortex in recognizing objects through multiple sensory modalities. In the orientation categorization task, rats learned to examine and judge the orientation of a raised, black and white grating using touch, vision, or both. Their multisensory performance was better than the predictions of linear models for cue combination, indicating synergy between the two sensory channels. Neural recordings made from a candidate associative cortical area, the posterior parietal cortex (PPC), reflected the principal neuronal correlates of the behavioral results: PPC neurons encoded both graded information about the object and categorical information about the animal's decision. Intriguingly, single neurons showed identical responses under each of the three modality conditions, providing a substrate for a neural circuit in the cortex that is involved in modality-invariant processing of objects.
Affiliation(s)
- Nader Nikbakht: Massachusetts Institute of Technology, Cambridge, MA, USA
6. Zhang WH. Decentralized Neural Circuits of Multisensory Information Integration in the Brain. Adv Exp Med Biol 2024; 1437:1-21. PMID: 38270850; DOI: 10.1007/978-981-99-7611-9_1.
Abstract
The brain combines multisensory inputs to obtain a complete and reliable description of the world. Recent experiments suggest that several interconnected multisensory brain areas are simultaneously involved in integrating multisensory information, but how these mutually connected areas achieve such integration was unknown. To answer this question, we used biologically plausible neural circuit models to develop a decentralized system for information integration comprising multiple interconnected multisensory brain areas. Through studying an example of integrating visual and vestibular cues to infer heading direction, we show that such a decentralized system is consistent with experimental observations. In particular, we demonstrate that this decentralized system can optimally integrate information by implementing sampling-based Bayesian inference. The Poisson variability of spike generation provides appropriate variability to drive sampling, and the interconnections between multisensory areas store the correlation prior between multisensory stimuli. The decentralized system predicts that optimally integrated information emerges locally from the dynamics of the communication between brain areas, and it sheds new light on the interpretation of the connectivity between multisensory brain areas.
Affiliation(s)
- Wen-Hao Zhang: Lyda Hill Department of Bioinformatics and O'Donnell Brain Institute, UT Southwestern Medical Center, Dallas, TX, USA
7. Jones SA, Noppeney U. Multisensory Integration and Causal Inference in Typical and Atypical Populations. Adv Exp Med Biol 2024; 1437:59-76. PMID: 38270853; DOI: 10.1007/978-981-99-7611-9_4.
Abstract
Multisensory perception is critical for effective interaction with the environment, but human responses to multisensory stimuli vary across the lifespan and appear changed in some atypical populations. In this review chapter, we consider multisensory integration within a normative Bayesian framework. We begin by outlining the complex computational challenges of multisensory causal inference and reliability-weighted cue integration, and discuss whether healthy young adults behave in accordance with normative Bayesian models. We then compare their behaviour with various other human populations (children, older adults, and those with neurological or neuropsychiatric disorders). In particular, we consider whether the differences seen in these groups are due only to changes in their computational parameters (such as sensory noise or perceptual priors), or whether the fundamental computational principles (such as reliability weighting) underlying multisensory perception may also be altered. We conclude by arguing that future research should aim explicitly to differentiate between these possibilities.
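The causal-inference computation this review discusses can be made concrete with the standard two-hypothesis model: compare the likelihood that two cues arose from one common source versus two independent sources, then average the fused and segregated estimates by the posterior over causes. A sketch with Gaussian likelihoods and a zero-mean Gaussian spatial prior (the standard model-averaging formulation; parameter values and names are illustrative):

```python
import numpy as np

def causal_inference_estimate(x_a, x_v, sigma_a, sigma_v, sigma_p, p_common=0.5):
    """Bayesian causal inference for two cues, with model averaging.

    x_a, x_v         : noisy measurements from cues A and V (e.g. auditory, visual)
    sigma_a, sigma_v : measurement noise of each cue
    sigma_p          : width of a zero-mean Gaussian prior over source location
    Returns (estimate of cue A's source, posterior probability of a common cause).
    """
    va, vv, vp = sigma_a ** 2, sigma_v ** 2, sigma_p ** 2
    # Likelihood of both measurements given one common source (C = 1)
    d1 = va * vv + va * vp + vv * vp
    like_c1 = (np.exp(-0.5 * ((x_a - x_v) ** 2 * vp + x_a ** 2 * vv + x_v ** 2 * va) / d1)
               / (2 * np.pi * np.sqrt(d1)))
    # Likelihood given two independent sources (C = 2)
    like_c2 = (np.exp(-0.5 * (x_a ** 2 / (va + vp) + x_v ** 2 / (vv + vp)))
               / (2 * np.pi * np.sqrt((va + vp) * (vv + vp))))
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    # Reliability-weighted fusion vs. the segregated estimate for cue A
    fused = (x_a / va + x_v / vv) / (1 / va + 1 / vv + 1 / vp)
    segregated = (x_a / va) / (1 / va + 1 / vp)
    return post_c1 * fused + (1 - post_c1) * segregated, post_c1
```

Nearby measurements yield a high common-cause posterior and near-full fusion; discrepant ones push the posterior toward segregation. The review's question of computational parameters versus principles maps onto this sketch directly: sensory noise and priors are the parameters, while the weighting and averaging rules are the principles.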
Affiliation(s)
- Samuel A Jones: Department of Psychology, Nottingham Trent University, Nottingham, UK
- Uta Noppeney: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
8. Mihali A, Broeker M, Ragalmuto FDM, Horga G. Introspective inference counteracts perceptual distortion. Nat Commun 2023; 14:7826. PMID: 38030601; PMCID: PMC10687029; DOI: 10.1038/s41467-023-42813-2.
Abstract
Introspective agents can recognize the extent to which their internal perceptual experiences deviate from the actual states of the external world. This ability, also known as insight, is critically required for reality testing and is impaired in psychosis, yet little is known about its cognitive underpinnings. We develop a Bayesian modeling framework and a psychophysics paradigm to quantitatively characterize this type of insight while people experience a motion after-effect illusion. People can incorporate knowledge about the illusion into their decisions when judging the actual direction of a motion stimulus, compensating for the illusion (and often overcompensating). Furthermore, confidence, reaction-time, and pupil-dilation data all show signatures consistent with inferential adjustments in the Bayesian insight model. Our results suggest that people can question the veracity of what they see by making insightful inferences that incorporate introspective knowledge about internal distortions.
Affiliation(s)
- Andra Mihali: New York State Psychiatric Institute, New York, NY, USA; Department of Psychiatry, Columbia University, New York, NY, USA
- Marianne Broeker: New York State Psychiatric Institute, New York, NY, USA; Department of Psychiatry, Columbia University; Teachers College, Columbia University; Department of Experimental Psychology, University of Oxford, Oxford, UK
- Florian D M Ragalmuto: New York State Psychiatric Institute, New York, NY, USA; Department of Psychiatry, Columbia University; Faculty of Behavioral and Movement Science, Vrije Universiteit, Amsterdam, the Netherlands; Berliner FortbildungsAkademie, Berlin, Germany
- Guillermo Horga: New York State Psychiatric Institute, New York, NY, USA; Department of Psychiatry, Columbia University, New York, NY, USA
9. Newman PM, Qi Y, Mou W, McNamara TP. Statistically Optimal Cue Integration During Human Spatial Navigation. Psychon Bull Rev 2023; 30:1621-1642. PMID: 37038031; DOI: 10.3758/s13423-023-02254-w.
Abstract
In 2007, Cheng and colleagues published their influential review wherein they analyzed the literature on spatial cue interaction during navigation through a Bayesian lens, and concluded that models of optimal cue integration often applied in psychophysical studies could explain cue interaction during navigation. Since then, numerous empirical investigations have been conducted to assess the degree to which human navigators are optimal when integrating multiple spatial cues during a variety of navigation-related tasks. In the current review, we discuss the literature on human cue integration during navigation that has been published since Cheng et al.'s original review. Evidence from most studies demonstrate optimal navigation behavior when humans are presented with multiple spatial cues. However, applications of optimal cue integration models vary in their underlying assumptions (e.g., uninformative priors and decision rules). Furthermore, cue integration behavior depends in part on the nature of the cues being integrated and the navigational task (e.g., homing versus non-home goal localization). We discuss the implications of these models and suggest directions for future research.
Affiliation(s)
- Phillip M Newman: Department of Psychology, Vanderbilt University, 301 Wilson Hall, 111 21st Avenue South, Nashville, TN 37240, USA
- Yafei Qi: Department of Psychology, P-217 Biological Sciences Building, University of Alberta, Edmonton, Alberta T6G 2R3, Canada
- Weimin Mou: Department of Psychology, University of Alberta, Edmonton, Alberta T6G 2R3, Canada
- Timothy P McNamara: Department of Psychology, Vanderbilt University, Nashville, TN 37240, USA
10. Fetsch CR, Noppeney U. How the brain controls decision making in a multisensory world. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220332. PMID: 37545306; PMCID: PMC10404917; DOI: 10.1098/rstb.2022.0332.
Abstract
Sensory systems evolved to provide the organism with information about the environment to guide adaptive behaviour. Neuroscientists and psychologists have traditionally considered each sense independently, a legacy of Aristotle and a natural consequence of their distinct physical and anatomical bases. However, from the point of view of the organism, perception and sensorimotor behaviour are fundamentally multi-modal; after all, each modality provides complementary information about the same world. Classic studies revealed much about where and how sensory signals are combined to improve performance, but these tended to treat multisensory integration as a static, passive, bottom-up process. It has become increasingly clear how this approach falls short, ignoring the interplay between perception and action, the temporal dynamics of the decision process and the many ways by which the brain can exert top-down control of integration. The goal of this issue is to highlight recent advances on these higher order aspects of multisensory processing, which together constitute a mainstay of our understanding of complex, natural behaviour and its neural basis. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Christopher R. Fetsch: Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Uta Noppeney: Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 EN Nijmegen, Netherlands
11. Jerjian SJ, Harsch DR, Fetsch CR. Self-motion perception and sequential decision-making: where are we heading? Philos Trans R Soc Lond B Biol Sci 2023; 378:20220333. PMID: 37545301; PMCID: PMC10404932; DOI: 10.1098/rstb.2022.0333.
Abstract
To navigate and guide adaptive behaviour in a dynamic environment, animals must accurately estimate their own motion relative to the external world. This is a fundamentally multisensory process involving integration of visual, vestibular and kinesthetic inputs. Ideal observer models, paired with careful neurophysiological investigation, helped to reveal how visual and vestibular signals are combined to support perception of linear self-motion direction, or heading. Recent work has extended these findings by emphasizing the dimension of time, both with regard to stimulus dynamics and the trade-off between speed and accuracy. Both time and certainty (i.e. the degree of confidence in a multisensory decision) are essential to the ecological goals of the system: terminating a decision process is necessary for timely action, and predicting one's accuracy is critical for making multiple decisions in a sequence, as in navigation. Here, we summarize a leading model for multisensory decision-making, then show how the model can be extended to study confidence in heading discrimination. Lastly, we preview ongoing efforts to bridge self-motion perception and navigation per se, including closed-loop virtual reality and active self-motion. The design of unconstrained, ethologically inspired tasks, accompanied by large-scale neural recordings, holds promise for a deeper understanding of spatial perception and decision-making in the behaving animal. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Steven J. Jerjian: Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Devin R. Harsch: Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA; Center for Neuroscience and Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Christopher R. Fetsch: Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
12. Zaidel A, Salomon R. Multisensory decisions from self to world. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220335. PMID: 37545311; PMCID: PMC10404927; DOI: 10.1098/rstb.2022.0335.
Abstract
Classic Bayesian models of perceptual inference describe how an ideal observer would integrate 'unisensory' measurements (multisensory integration) and attribute sensory signals to their origin(s) (causal inference). However, in the brain, sensory signals are always received in the context of a multisensory bodily state-namely, in combination with other senses. Moreover, sensory signals from both interoceptive sensing of one's own body and exteroceptive sensing of the world are highly interdependent and never occur in isolation. Thus, the observer must fundamentally determine whether each sensory observation is from an external (versus internal, self-generated) source to even be considered for integration. Critically, solving this primary causal inference problem requires knowledge of multisensory and sensorimotor dependencies. Thus, multisensory processing is needed to separate sensory signals. These multisensory processes enable us to simultaneously form a sense of self and form distinct perceptual decisions about the external world. In this opinion paper, we review and discuss the similarities and distinctions between multisensory decisions underlying the sense of self and those directed at acquiring information about the world. We call attention to the fact that heterogeneous multisensory processes take place all along the neural hierarchy (even in forming 'unisensory' observations) and argue that more integration of these aspects, in theory and experiment, is required to obtain a more comprehensive understanding of multisensory brain function. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Adam Zaidel: Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan 5290002, Israel
- Roy Salomon: Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan 5290002, Israel; Department of Cognitive Sciences, University of Haifa, Mount Carmel, Haifa 3498838, Israel
13. Cone JJ, Mitchell AO, Parker RK, Maunsell JHR. Temporal weighting of cortical and subcortical spikes reveals stimulus dependent differences in their contributions to behavior. bioRxiv [Preprint] 2023:2023.08.23.554473. PMID: 37662213; PMCID: PMC10473714; DOI: 10.1101/2023.08.23.554473.
Abstract
The primary visual cortex (V1) and the superior colliculus (SC) both occupy stations early in the processing of visual information. They have long been thought to perform distinct functions, with V1 supporting perception of visual features and the SC regulating orienting to visual inputs. However, growing evidence suggests that the SC supports perception of many of the same visual features traditionally associated with V1. To distinguish V1 and SC contributions to visual processing, it is critical to determine whether both areas causally contribute to perception of specific visual stimuli. Here, mice reported changes in visual contrast or luminance near perceptual threshold while we presented white noise patterns of optogenetic stimulation to V1 or SC inhibitory neurons. We then performed a reverse correlation analysis on the optogenetic stimuli to estimate a neuronal-behavioral kernel (NBK), a moment-to-moment estimate of the impact of V1 or SC inhibition on stimulus detection. We show that the earliest moments of stimulus-evoked activity in SC are critical for detection of both luminance and contrast changes. Strikingly, there was a robust stimulus-aligned modulation in the V1 contrast-detection NBK, but no sign of a comparable modulation for luminance detection. The data suggest that perception of visual contrast depends on both V1 and SC spiking, whereas mice preferentially use SC activity to detect changes in luminance. Electrophysiological recordings showed that neurons in both SC and V1 responded strongly to both visual stimulus types, while the reverse correlation analysis reveals when these neuronal signals actually contribute to visually guided behaviors.
14. Otsuka T, Yotsumoto Y. Near-optimal integration of the magnitude information of time and numerosity. R Soc Open Sci 2023; 10:230153. PMID: 37564065; PMCID: PMC10410204; DOI: 10.1098/rsos.230153.
Abstract
Magnitude information is often correlated in the external world, providing complementary information about the environment. As if to reflect this relationship, the perceptions of different magnitudes (e.g. time and numerosity) are known to influence one another. Recent studies suggest that such magnitude interaction is similar to cue integration, such as multisensory integration. Here, we tested whether human observers could integrate the magnitudes of two quantities with distinct physical units (i.e. time and numerosity) as abstract magnitude information. The participants compared the magnitudes of two visual stimuli based on time, numerosity, or both. Consistent with the predictions of the maximum-likelihood estimation model, the participants integrated time and numerosity in a near-optimal manner; the weight of each dimension was proportional to their relative reliability, and the integrated estimate was more reliable than either the time or numerosity estimate. Furthermore, the integration approached a statistical optimum as the temporal discrepancy of the acquisition of each piece of information became smaller. These results suggest that magnitude interaction arises through a similar computational mechanism to cue integration. They are also consistent with the idea that different magnitudes are processed by a generalized magnitude system.
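The maximum-likelihood prediction tested here is reliability-weighted averaging: each cue's weight is proportional to its inverse variance, and the combined estimate is more reliable than either cue alone. A minimal numerical sketch of that rule (the numbers in the usage note are illustrative, not the study's data):

```python
def mle_integrate(est_time, var_time, est_num, var_num):
    """Optimal (MLE) combination of two magnitude estimates.

    Each weight is the cue's reliability (inverse variance) normalized by
    the total reliability; the combined variance is always smaller than
    either single-cue variance.
    """
    w_time = (1 / var_time) / (1 / var_time + 1 / var_num)
    w_num = 1 - w_time
    combined = w_time * est_time + w_num * est_num
    var_combined = 1 / (1 / var_time + 1 / var_num)
    return combined, var_combined
```

For example, a time estimate of 1.0 with variance 0.04 and a numerosity estimate of 1.2 with variance 0.01 yield weights of 0.2 and 0.8, a combined estimate of 1.16, and a combined variance of 0.008, below both single-cue variances, which is exactly the reliability pattern the study reports.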
Affiliation(s)
- Taku Otsuka: Department of Life Sciences, University of Tokyo, Tokyo, Japan
- Yuko Yotsumoto: Department of Life Sciences, University of Tokyo, Tokyo, Japan
15. Kemp JT, Cesanek E, Domini F. Perceiving depth from texture and disparity cues: Evidence for a non-probabilistic account of cue integration. J Vis 2023; 23:13. PMID: 37486299; PMCID: PMC10382782; DOI: 10.1167/jov.23.7.13.
Abstract
Bayesian inference theories have been extensively used to model how the brain derives three-dimensional (3D) information from ambiguous visual input. In particular, the maximum likelihood estimation (MLE) model combines estimates from multiple depth cues according to their relative reliability to produce the most probable 3D interpretation. Here, we tested an alternative theory of cue integration, termed the intrinsic constraint (IC) theory, which postulates that the visual system derives the most stable, not most probable, interpretation of the visual input amid variations in viewing conditions. The vector sum model provides a normative approach for achieving this goal, where individual cue estimates are components of a multidimensional vector whose norm determines the combined estimate. Individual cue estimates are not accurate but related to distal 3D properties through a deterministic mapping. In three experiments, we show that the IC theory can more adeptly account for 3D cue integration than MLE models. In Experiment 1, we show systematic biases in the perception of depth from texture and depth from binocular disparity. Critically, we demonstrate that the vector sum model predicts an increase in perceived depth when these cues are combined. In Experiment 2, we illustrate the IC theory's radical reinterpretation of the just noticeable difference (JND) and test the related vector sum model prediction of the classic finding of smaller JNDs for combined-cue versus single-cue stimuli. In Experiment 3, we confirm the vector sum prediction that biases found in cue integration experiments cannot be attributed to flatness cues, as the MLE model predicts.
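The contrast between the two combination rules can be sketched directly: MLE takes a reliability-weighted average, so the combined estimate lies between the single-cue estimates, whereas the vector sum model takes the norm of a vector whose components are the single-cue estimates, so combining cues increases the output. A sketch under the simplifying assumption that both cue estimates are expressed in a common depth unit (function names and values are illustrative, not the paper's stimuli or code):

```python
import math

def mle_combined(d_texture, d_disparity, var_texture, var_disparity):
    """MLE rule: reliability-weighted average, bounded by the two estimates."""
    w_t = (1 / var_texture) / (1 / var_texture + 1 / var_disparity)
    return w_t * d_texture + (1 - w_t) * d_disparity

def vector_sum_combined(d_texture, d_disparity):
    """IC vector sum rule: the norm of the vector of single-cue components,
    which always exceeds either nonzero component on its own."""
    return math.hypot(d_texture, d_disparity)
```

With two equal single-cue estimates d, the MLE rule returns d while the vector sum returns sqrt(2) * d, which is the qualitative signature behind Experiment 1's predicted increase in perceived depth for combined-cue stimuli.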
Affiliation(s)
- Jovan T Kemp
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- Evan Cesanek
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Fulvio Domini
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- Italian Institute of Technology, Rovereto, Italy
16
Domini F. The case against probabilistic inference: a new deterministic theory of 3D visual processing. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210458. [PMID: 36511407 PMCID: PMC9745883 DOI: 10.1098/rstb.2021.0458] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 12/15/2022]
Abstract
How the brain derives 3D information from inherently ambiguous visual input remains the fundamental question of human vision. The past two decades of research have addressed this question as a problem of probabilistic inference, the dominant model being maximum-likelihood estimation (MLE). This model assumes that independent depth-cue modules derive noisy but statistically accurate estimates of 3D scene parameters that are combined through a weighted average. Cue weights are adjusted based on the system's representation of each module's output variability. Here I demonstrate that the MLE model fails to account for important psychophysical findings and, importantly, misinterprets the just noticeable difference, a hallmark measure of stimulus discriminability, as an estimate of perceptual uncertainty. I propose a new theory, termed Intrinsic Constraint, which postulates that the visual system does not derive the most probable interpretation of the visual input, but rather, the most stable interpretation amid variations in viewing conditions. This goal is achieved with the Vector Sum model, which represents individual cue estimates as components of a multi-dimensional vector whose norm determines the combined output. This model accounts for the psychophysical findings cited in support of MLE, while predicting existing and new findings that contradict the MLE model. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Fulvio Domini
- CLPS, Brown University, 190 Thayer Street, Providence, Rhode Island 02912-9067, USA
17
Linton P, Morgan MJ, Read JCA, Vishwanath D, Creem-Regehr SH, Domini F. New Approaches to 3D Vision. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210443. [PMID: 36511413 PMCID: PMC9745878 DOI: 10.1098/rstb.2021.0443] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/30/2022] [Accepted: 10/25/2022] [Indexed: 12/15/2022]
Abstract
New approaches to 3D vision are enabling new advances in artificial intelligence and autonomous vehicles, a better understanding of how animals navigate the 3D world, and new insights into human perception in virtual and augmented reality. Whilst traditional approaches to 3D vision in computer vision (SLAM: simultaneous localization and mapping), animal navigation (cognitive maps), and human vision (optimal cue integration) start from the assumption that the aim of 3D vision is to provide an accurate 3D model of the world, the new approaches to 3D vision explored in this issue challenge this assumption. Instead, they investigate the possibility that computer vision, animal navigation, and human vision can rely on partial or distorted models or no model at all. This issue also highlights the implications for artificial intelligence, autonomous vehicles, human perception in virtual and augmented reality, and the treatment of visual disorders, all of which are explored by individual articles. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Paul Linton
- Presidential Scholars in Society and Neuroscience, Center for Science and Society, Columbia University, New York, NY 10027, USA
- Italian Academy for Advanced Studies in America, Columbia University, New York, NY 10027, USA
- Visual Inference Lab, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Michael J. Morgan
- Department of Optometry and Visual Sciences, City, University of London, Northampton Square, London EC1V 0HB, UK
- Jenny C. A. Read
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, Tyne & Wear NE2 4HH, UK
- Dhanraj Vishwanath
- School of Psychology and Neuroscience, University of St Andrews, St Andrews, Fife KY16 9JP, UK
- Fulvio Domini
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI 02912-9067, USA
18
Jeong W, Kim S, Park J, Lee J. Multivariate EEG activity reflects the Bayesian integration and the integrated Galilean relative velocity of sensory motion during sensorimotor behavior. Commun Biol 2023; 6:113. [PMID: 36709242 PMCID: PMC9884247 DOI: 10.1038/s42003-023-04481-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/18/2022] [Accepted: 01/12/2023] [Indexed: 01/29/2023]
Abstract
Humans integrate multiple sources of information for action-taking, using the reliability of each source to allocate weight to the data. This reliability-weighted information integration is a crucial property of Bayesian inference. In this study, participants were asked to perform a smooth pursuit eye movement task in which we independently manipulated the reliability of pursuit target motion and the direction-of-motion cue. Through an analysis of pursuit initiation and multivariate electroencephalography activity, we found neural and behavioral evidence of Bayesian information integration: more attraction toward the cue direction was generated when the target motion was weak and unreliable. Furthermore, using mathematical modeling, we found that the neural signature of Bayesian information integration had extra-retinal origins, although most of the multivariate electroencephalography activity patterns during pursuit were best correlated with the retinal velocity errors accumulated over time. Our results demonstrated neural implementation of Bayesian inference in human oculomotor behavior.
Affiliation(s)
- Woojae Jeong
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, 16419, Republic of Korea
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA 90089, USA
- Seolmin Kim
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, 16419, Republic of Korea
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon, 16419, Republic of Korea
- JeongJun Park
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, 16419, Republic of Korea
- Division of Biology and Biomedical Sciences, Program in Neurosciences, Washington University in St. Louis, St. Louis, MO 63130, USA
- Joonyeol Lee
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, 16419, Republic of Korea
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon, 16419, Republic of Korea
- Department of Intelligent Precision Healthcare Convergence, Sungkyunkwan University, Suwon, 16419, Republic of Korea
19
Aston S, Pattie C, Graham R, Slater H, Beierholm U, Nardini M. Newly learned shape-color associations show signatures of reliability-weighted averaging without forced fusion or a memory color effect. J Vis 2022; 22:8. [PMID: 36580296 PMCID: PMC9804025 DOI: 10.1167/jov.22.13.8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 12/30/2022]
Abstract
Reliability-weighted averaging of multiple perceptual estimates (or cues) can improve precision. Research suggests that newly learned statistical associations can be rapidly integrated in this way for efficient decision-making. Yet, it remains unclear if the integration of newly learned statistics into decision-making can directly influence perception, rather than taking place only at the decision stage. In two experiments, we implicitly taught observers novel associations between shape and color. Observers made color matches by adjusting the color of an oval to match a simultaneously presented reference. As the color of the oval changed across trials, so did its shape, according to a novel mapping of axis ratio to color. Observers showed signatures of reliability-weighted averaging: a precision improvement in both experiments and reweighting of the newly learned shape cue with changes in uncertainty in Experiment 2. To ask whether this was accompanied by perceptual effects, Experiment 1 tested for forced fusion by measuring color discrimination thresholds with and without incongruent novel cues. Experiment 2 tested for a memory color effect, with observers adjusting the color of ovals with different axis ratios until they appeared gray. There was no evidence for forced fusion and the opposite of a memory color effect. Overall, our results suggest that the ability to quickly learn novel cues and integrate them with familiar cues is not immediately (within the short duration of our experiments and in the domain of color and shape) accompanied by common perceptual effects.
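The "precision improvement" signature this abstract refers to follows from precisions (inverse variances) of independent cues adding under reliability-weighted averaging. A minimal sketch of that prediction (the function name and the example noise levels are illustrative, not values from the study):

```python
def combined_sd(sd_a, sd_b):
    """Standard deviation predicted for a reliability-weighted average of
    two independent cues: precisions (1/variance) add, so the combined
    variance is always below either single-cue variance."""
    combined_var = 1.0 / (1.0 / sd_a ** 2 + 1.0 / sd_b ** 2)
    return combined_var ** 0.5

# Two equally reliable cues: the combined SD improves by a factor of sqrt(2).
print(combined_sd(2.0, 2.0))  # 1.4142...
```

Observing a matching drop in response variability when the learned shape cue is available is the behavioral signature the experiments test for.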
Affiliation(s)
- Stacey Aston
- Department of Psychology, Durham University, Durham, UK
- Cat Pattie
- Biosciences Institute, Newcastle University, Newcastle, UK
- Rachael Graham
- Department of Psychology, Durham University, Durham, UK
- Heather Slater
- Department of Psychology, Durham University, Durham, UK
- Marko Nardini
- Department of Psychology, Durham University, Durham, UK
20
How Peripheral Vestibular Damage Affects Velocity Storage: a Causative Explanation. J Assoc Res Otolaryngol 2022; 23:551-566. [PMID: 35768706 PMCID: PMC9437187 DOI: 10.1007/s10162-022-00853-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Received: 11/08/2021] [Accepted: 05/30/2022] [Indexed: 10/17/2022]
Abstract
Velocity storage is a centrally-mediated mechanism that processes peripheral vestibular inputs. One prominent aspect of velocity storage is its effect on dynamic responses to yaw rotation. Specifically, when normal human subjects are accelerated to constant angular yaw velocity, horizontal eye movements and perceived angular velocity decay exponentially with a time constant circa 15-30 s, even though the input from the vestibular periphery decays much faster (~ 6 s). Peripheral vestibular damage causes a time constant reduction, which is useful for clinical diagnoses, but a mechanistic explanation for the relationship between vestibular damage and changes in these behavioral dynamics is lacking. It has been hypothesized that Bayesian optimization determines ideal velocity storage dynamics based on statistics of vestibular noise and experienced motion. Specifically, while a longer time constant would make the central estimate of angular head velocity closer to actual head motion, it may also result in the accumulation of neural noise which simultaneously degrades precision. Thus, the brain may balance these two effects by determining the time constant that optimizes behavior. We applied a Bayesian optimal Kalman filter to determine the ideal velocity storage time constant for unilateral damage. Predicted time constants were substantially lower than normal and similar to patients. Building on our past work showing that Bayesian optimization explains age-related changes in velocity storage, we also modeled interactions between age-related hair cell loss and peripheral damage. These results provide a plausible mechanistic explanation for changes in velocity storage after peripheral damage. Results also suggested that even after peripheral damage, noise originating in the periphery or early central processing may remain relevant in neurocomputations. Overall, our findings support the hypothesis that the brain optimizes velocity storage based on the vestibular signal-to-noise ratio.
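The Kalman-filter machinery this abstract builds on can be illustrated with the scalar steady-state case. This is not the paper's model; it is a generic sketch in which a damaged periphery is represented only as a noisier measurement (larger r), and the q and r values are arbitrary. It shows the one property the argument relies on: a noisier sensor earns a smaller optimal gain, i.e. the central estimator leans on it less.

```python
def steady_state_gain(q, r, a=1.0, iters=200):
    """Iterate the scalar Riccati recursion to steady state for the model
    x[k+1] = a*x[k] + w  (process noise variance q),
    z[k]   = x[k] + v    (measurement noise variance r),
    and return the converged Kalman gain."""
    p = 1.0
    k = 0.0
    for _ in range(iters):
        p_pred = a * a * p + q        # predicted estimation-error variance
        k = p_pred / (p_pred + r)     # gain: how much to trust the sensor
        p = (1.0 - k) * p_pred        # updated estimation-error variance
    return k

# Hypothetical variances: damage modeled purely as increased sensor noise.
k_healthy = steady_state_gain(q=0.1, r=1.0)
k_damaged = steady_state_gain(q=0.1, r=4.0)  # smaller gain than k_healthy
```

In the cited work this trade-off between tracking the true head velocity and accumulating neural noise is what drives the predicted shortening of the velocity storage time constant after peripheral damage.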
21
Yang P, Saunders JA, Chen Z. The experience of stereoblindness does not improve use of texture for slant perception. J Vis 2022; 22:3. [PMID: 35412556 PMCID: PMC9012895 DOI: 10.1167/jov.22.5.3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/24/2022]
Abstract
Stereopsis is an important depth cue for normal people, but a subset of people suffer from stereoblindness and cannot use binocular disparity as a cue to depth. Does this experience of stereoblindness modulate use of other depth cues? We investigated this question by comparing perception of 3D slant from texture for stereoblind people and stereo-normal people. Subjects performed slant discrimination and slant estimation tasks using both monocular and binocular stimuli. We found that the two groups had comparable ability to discriminate slant from texture information and showed similar mappings between texture information and slant perception (biased perception toward the frontal surface with texture information indicating low slants). The results suggest that the experience of stereoblindness did not change the use of texture information for slant perception. In addition, we found that stereoblind people benefitted from binocular viewing in the slant estimation task, despite their inability to use binocular disparity information. These findings are generally consistent with the optimal cue combination model of slant perception.
Affiliation(s)
- Pin Yang
- Shanghai Key Laboratory of Brain Functional Genomics, Affiliated Mental Health Center, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Zhongting Chen
- Shanghai Key Laboratory of Brain Functional Genomics, Affiliated Mental Health Center, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Shanghai Changning Mental Health Center, Shanghai, China
22
Abstract
Spatial navigation is a complex cognitive activity that depends on perception, action, memory, reasoning, and problem-solving. Effective navigation depends on the ability to combine information from multiple spatial cues to estimate one's position and the locations of goals. Spatial cues include landmarks and other visible features of the environment, as well as body-based cues generated by self-motion (vestibular, proprioceptive, and efferent information). A number of projects have investigated the extent to which visual cues and body-based cues are combined optimally according to statistical principles. Possible limitations of these investigations are that they have not accounted for navigators' prior experiences with or assumptions about the task environment and have not tested complete decision models. We examine cue combination in spatial navigation from a Bayesian perspective and present the fundamental principles of Bayesian decision theory. We show that a complete Bayesian decision model with an explicit loss function can explain a discrepancy between optimal cue weights and empirical cue weights observed by Chen et al. (Cognitive Psychology, 95, 105-144, 2017), and that the use of informative priors to represent cue bias can explain the incongruity between heading variability and heading direction observed by Zhao and Warren (2015b, Psychological Science, 26(6), 915-924). We also discuss Petzschner and Glasauer's (Journal of Neuroscience, 31(47), 17220-17229, 2011) use of priors to explain biases in estimates of linear displacements during visual path integration. We conclude that Bayesian decision theory offers a productive theoretical framework for investigating human spatial navigation and believe that it will lead to a deeper understanding of navigational behaviors.
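The role of the "explicit loss function" this abstract emphasizes can be sketched with a toy posterior. This is not the cited decision models; the grid, the standard-normal posterior, and the 3:1 cost ratio are arbitrary assumptions for illustration. The point is that the same posterior yields different optimal estimates under different losses, which is how a complete decision model can reconcile "non-optimal" empirical cue weights.

```python
import numpy as np

def bayes_estimate(grid, post, loss):
    """Choose the action on `grid` that minimizes posterior expected loss."""
    risks = [np.sum(post * loss(a, grid)) for a in grid]
    return grid[int(np.argmin(risks))]

grid = np.linspace(-5.0, 5.0, 1001)
post = np.exp(-0.5 * grid ** 2)
post /= post.sum()                      # standard-normal posterior over the grid

squared = lambda a, x: (a - x) ** 2     # symmetric loss -> posterior mean
asym = lambda a, x: np.where(a > x, 3.0, 1.0) * np.abs(a - x)  # overshoot costs 3x

est_symmetric = bayes_estimate(grid, post, squared)   # near 0
est_cautious = bayes_estimate(grid, post, asym)       # shifted below the mean
```

Under the asymmetric loss the optimal action is a lower quantile of the posterior rather than its mean, so observed behavior can be Bayes-optimal even when it departs from the MLE prediction.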
23
Candy TR, Cormack LK. Recent understanding of binocular vision in the natural environment with clinical implications. Prog Retin Eye Res 2021; 88:101014. [PMID: 34624515 PMCID: PMC8983798 DOI: 10.1016/j.preteyeres.2021.101014] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 04/30/2021] [Revised: 09/26/2021] [Accepted: 09/29/2021] [Indexed: 10/20/2022]
Abstract
Technological advances in recent decades have allowed us to measure both the information available to the visual system in the natural environment and the rich array of behaviors that the visual system supports. This review highlights the tasks undertaken by the binocular visual system in particular and how, for much of human activity, these tasks differ from those considered when an observer fixates a static target on the midline. The everyday motor and perceptual challenges involved in generating a stable, useful binocular percept of the environment are discussed, together with how these challenges are but minimally addressed by much of current clinical interpretation of binocular function. The implications for new technology, such as virtual reality, are also highlighted in terms of clinical and basic research application.
Affiliation(s)
- T Rowan Candy
- School of Optometry, Programs in Vision Science, Neuroscience and Cognitive Science, Indiana University, 800 East Atwater Avenue, Bloomington, IN, 47405, USA.
- Lawrence K Cormack
- Department of Psychology, Institute for Neuroscience, and Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, 78712, USA.
24
Newman PM, McNamara TP. Integration of visual landmark cues in spatial memory. Psychol Res 2021; 86:1636-1654. [PMID: 34420070 PMCID: PMC8380114 DOI: 10.1007/s00426-021-01581-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/28/2021] [Accepted: 08/11/2021] [Indexed: 11/25/2022]
Abstract
Over the past two decades, much research has been conducted to investigate whether humans are optimal when integrating sensory cues during spatial memory and navigational tasks. Although this work has consistently demonstrated optimal integration of visual cues (e.g., landmarks) with body-based cues (e.g., path integration) during human navigation, little work has investigated how cues of the same sensory type are integrated in spatial memory. A few recent studies have reported mixed results, with some showing very little benefit to having access to more than one landmark, and others showing that multiple landmarks can be optimally integrated in spatial memory. In the current study, we employed a combination of immersive and non-immersive virtual reality spatial memory tasks to test adult humans' ability to integrate multiple landmark cues across six experiments. Our results showed that optimal integration of multiple landmark cues depends on the difficulty of the task, and that the presence of multiple landmarks can elicit an additional latent cue when estimating locations from a ground-level perspective, but not an aerial perspective.
Affiliation(s)
- Phillip M Newman
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, 111 21st Avenue South, Nashville, TN, 37212, USA.
- Timothy P McNamara
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, 111 21st Avenue South, Nashville, TN, 37212, USA
25
Redundancy between spectral and higher-order texture statistics for natural image segmentation. Vision Res 2021; 187:55-65. [PMID: 34217005 DOI: 10.1016/j.visres.2021.06.007] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 08/19/2020] [Revised: 06/09/2021] [Accepted: 06/11/2021] [Indexed: 11/23/2022]
Abstract
Visual texture, defined by local image statistics, provides important information to the human visual system for perceptual segmentation. Second-order or spectral statistics (equivalent to the Fourier power spectrum) are a well-studied segmentation cue. However, the role of higher-order statistics (HOS) in segmentation remains unclear, particularly for natural images. Recent experiments indicate that, in peripheral vision, the HOS of the widely adopted Portilla-Simoncelli texture model are a weak segmentation cue compared to spectral statistics, despite the fact that both are necessary to explain other perceptual phenomena and to support high-quality texture synthesis. Here we test whether this discrepancy reflects a property of natural image statistics. First, we observe that differences in spectral statistics across segments of natural images are redundant with differences in HOS. Second, using linear and nonlinear classifiers, we show that each set of statistics individually affords high performance in natural scenes and texture segmentation tasks, but combining spectral statistics and HOS produces relatively small improvements. Third, we find that HOS improve segmentation for a subset of images, although these images are difficult to identify. We also find that different subsets of HOS improve segmentation to a different extent, in agreement with previous physiological and perceptual work. These results show that the HOS add modestly to spectral statistics for natural image segmentation. We speculate that tuning to natural image statistics under resource constraints could explain the weak contribution of HOS to perceptual segmentation in human peripheral vision.
26
Gardner MH, Uffing E, Van Vaeck N, Szmrecsanyi B. Variation isn't that hard: Morphosyntactic choice does not predict production difficulty. PLoS One 2021; 16:e0252602. [PMID: 34153033 PMCID: PMC8216537 DOI: 10.1371/journal.pone.0252602] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 10/11/2020] [Accepted: 05/18/2021] [Indexed: 11/19/2022]
Abstract
The following paper explores the link between production difficulty and grammatical variability. Using a sub-sample of the Switchboard Corpus of American English (285 transcripts, 34 speakers), this paper shows that the presence of variable contexts does not positively correlate with two metrics of production difficulty, namely filled pauses (um and uh) and unfilled pauses (speech planning time). When 20 morphosyntactic variables are considered collectively (N = 6,268), there is no positive effect. In other words, variable contexts do not correlate with measurable production difficulties. These results challenge the view that grammatical variability is somehow sub-optimal for speakers because it demands additional, burdensome cognitive planning.
Affiliation(s)
- Eva Uffing
- Department of Linguistics, KU Leuven, Leuven, Belgium
27
Saini H, Jordan H, Fallah M. Color Modulates Feature Integration. Front Psychol 2021; 12:680558. [PMID: 34177733 PMCID: PMC8226161 DOI: 10.3389/fpsyg.2021.680558] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 04/07/2021] [Accepted: 05/19/2021] [Indexed: 11/21/2022]
Abstract
Bayesian models of object recognition propose the resolution of ambiguity through probabilistic integration of prior experience with available sensory information. Color, even when task-irrelevant, has been shown to modulate high-level cognitive control tasks. However, it remains unclear how color modulations affect lower-level perceptual processing. We investigated whether color affects feature integration using the flash-jump illusion. This illusion occurs when an apparent motion stimulus, a rectangular bar appearing at different locations along a motion trajectory, changes color at a single position. Observers misperceive this color change as occurring farther along the trajectory of motion. This mislocalization error is proposed to be produced by a Bayesian perceptual framework dependent on responses in area V4. Our results demonstrated that the color of the flash modulated the magnitude of the flash-jump illusion such that participants reported less of a shift, i.e., a more veridical flash location, for both red and blue flashes, as compared to green and yellow. Our findings extend color-dependent modulation effects found in higher-order executive functions into lower-level Bayesian perceptual processes. Our results also support the theory that feature integration is a Bayesian process. In this framework, color modulations play an inherent and automatic role as different colors have different weights in Bayesian perceptual processing.
Affiliation(s)
- Harpreet Saini
- Department of Biology, York University, Toronto, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Application (VISTA), York University, Toronto, ON, Canada
- Heather Jordan
- Centre for Vision Research, York University, Toronto, ON, Canada
- School of Kinesiology and Health Science, York University, Toronto, ON, Canada
- Mazyar Fallah
- Department of Biology, York University, Toronto, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Application (VISTA), York University, Toronto, ON, Canada
- School of Kinesiology and Health Science, York University, Toronto, ON, Canada
- Department of Human Health and Nutritional Sciences, College of Biological Science, University of Guelph, Guelph, ON, Canada
28
Kang YH, Löffler A, Jeurissen D, Zylberberg A, Wolpert DM, Shadlen MN. Multiple decisions about one object involve parallel sensory acquisition but time-multiplexed evidence incorporation. eLife 2021; 10:e63721. [PMID: 33688829 PMCID: PMC8112870 DOI: 10.7554/eLife.63721] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Received: 10/04/2020] [Accepted: 03/06/2021] [Indexed: 01/31/2023]
Abstract
The brain is capable of processing several streams of information that bear on different aspects of the same problem. Here, we address the problem of making two decisions about one object, by studying difficult perceptual decisions about the color and motion of a dynamic random dot display. We find that the accuracy of one decision is unaffected by the difficulty of the other decision. However, the response times reveal that the two decisions do not form simultaneously. We show that both stimulus dimensions are acquired in parallel for the initial ∼0.1 s but are then incorporated serially in time-multiplexed bouts. Thus, there is a bottleneck that precludes updating more than one decision at a time, and a buffer that stores samples of evidence while access to the decision is blocked. We suggest that this bottleneck is responsible for the long timescales of many cognitive operations framed as decisions.
Affiliation(s)
- Yul Hr Kang
- Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, United States
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Anne Löffler
- Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, United States
- Kavli Institute for Brain Science, Columbia University, New York, United States
- Danique Jeurissen
- Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, United States
- Howard Hughes Medical Institute, Columbia University, New York, United States
- Ariel Zylberberg
- Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, United States
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, United States
- Daniel M Wolpert
- Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, United States
- Michael N Shadlen
- Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, United States
- Kavli Institute for Brain Science, Columbia University, New York, United States
- Howard Hughes Medical Institute, Columbia University, New York, United States
29
Jones SA, Noppeney U. Ageing and multisensory integration: A review of the evidence, and a computational perspective. Cortex 2021; 138:1-23. [PMID: 33676086 DOI: 10.1016/j.cortex.2021.02.001] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Received: 07/08/2020] [Revised: 01/23/2021] [Accepted: 02/02/2021] [Indexed: 11/29/2022]
Abstract
The processing of multisensory signals is crucial for effective interaction with the environment, but our ability to perform this vital function changes as we age. In the first part of this review, we summarise existing research into the effects of healthy ageing on multisensory integration. We note that age differences vary substantially with the paradigms and stimuli used: older adults often receive at least as much benefit (to both accuracy and response times) as younger controls from congruent multisensory stimuli, but are also consistently more negatively impacted by the presence of intersensory conflict. In the second part, we outline a normative Bayesian framework that provides a principled and computationally informed perspective on the key ingredients involved in multisensory perception, and how these are affected by ageing. Applying this framework to the existing literature, we conclude that changes to sensory reliability, prior expectations (together with attentional control), and decisional strategies all contribute to the age differences observed. However, we find no compelling evidence of any age-related changes to the basic inference mechanisms involved in multisensory perception.
Affiliation(s)
- Samuel A Jones
- The Staffordshire Centre for Psychological Research, Staffordshire University, Stoke-on-Trent, UK.
- Uta Noppeney
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands.
30
Asilador A, Llano DA. Top-Down Inference in the Auditory System: Potential Roles for Corticofugal Projections. Front Neural Circuits 2021; 14:615259. [PMID: 33551756 PMCID: PMC7862336 DOI: 10.3389/fncir.2020.615259] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Received: 10/08/2020] [Accepted: 12/17/2020] [Indexed: 01/28/2023]
Abstract
It has become widely accepted that humans use contextual information to infer the meaning of ambiguous acoustic signals. In speech, for example, high-level semantic, syntactic, or lexical information shapes our understanding of a phoneme buried in noise. Most current theories to explain this phenomenon rely on hierarchical predictive coding models involving a set of Bayesian priors emanating from high-level brain regions (e.g., prefrontal cortex) that are used to influence processing at lower levels of the cortical sensory hierarchy (e.g., auditory cortex). As such, virtually all proposed models to explain top-down facilitation are focused on intracortical connections, and consequently, subcortical nuclei have scarcely been discussed in this context. However, subcortical auditory nuclei receive massive, heterogeneous, and cascading descending projections at every level of the sensory hierarchy, and activation of these systems has been shown to improve speech recognition. It is not yet clear whether or how top-down modulation to resolve ambiguous sounds calls upon these corticofugal projections. Here, we review the literature on top-down modulation in the auditory system, primarily focused on humans and cortical imaging/recording methods, and attempt to relate these findings to a growing animal literature, which has primarily been focused on corticofugal projections. We argue that corticofugal pathways contain the requisite circuitry to implement predictive coding mechanisms to facilitate perception of complex sounds and that top-down modulation at early (i.e., subcortical) stages of processing complements modulation at later (i.e., cortical) stages of processing. Finally, we suggest experimental approaches for future studies on this topic.
Affiliation(s)
- Alexander Asilador
- Neuroscience Program, The University of Illinois at Urbana-Champaign, Champaign, IL, United States
- Beckman Institute for Advanced Science and Technology, Urbana, IL, United States
- Daniel A. Llano
- Neuroscience Program, The University of Illinois at Urbana-Champaign, Champaign, IL, United States
- Beckman Institute for Advanced Science and Technology, Urbana, IL, United States
- Molecular and Integrative Physiology, The University of Illinois at Urbana-Champaign, Champaign, IL, United States
31
Halperin O, Karni R, Israeli-Korn S, Hassin-Baer S, Zaidel A. Overconfidence in visual perception in Parkinson's disease. Eur J Neurosci 2021; 53:2027-2039. [PMID: 33368717 DOI: 10.1111/ejn.15093] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2020] [Revised: 12/04/2020] [Accepted: 12/18/2020] [Indexed: 01/23/2023]
Abstract
Increased dependence on visual cues in Parkinson's disease (PD) can unbalance the perception-action loop, impair multisensory integration, and affect everyday function of PD patients. It is currently unknown why PD patients seem to be more reliant on their visual cues. We hypothesized that PD patients may be overconfident in the reliability (precision) of their visual cues. In this study, we tested coherent visual motion perception in PD and probed patients' subjective (self-reported) confidence in their visual motion perception. Twenty patients with idiopathic PD, 21 healthy age-matched controls, and 20 healthy young adult participants were presented with visual stimuli of moving dots (random dot kinematograms). They were asked to report: (1) whether the aggregate motion of dots was to the left or to the right, and (2) how confident they were that their perceptual discrimination was correct. Visual motion discrimination thresholds were similar (unimpaired) in PD compared to the other groups. By contrast, PD patients were significantly overconfident in their visual perceptual decisions (p = .002 and p < .001 vs. the age-matched and young adult groups, respectively). These results suggest intact visual motion perception, but overestimation of visual cue reliability, in PD. Overconfidence in visual (vs. other, e.g., somatosensory) cues could underlie increased visual dependence and impaired multisensory/sensorimotor integration in PD. It could thereby contribute to gait and balance impairments, and affect everyday activities, such as driving. Future work should investigate and compare PD confidence in somatosensory function. A better understanding of altered sensory reliance might open up new avenues to treat debilitating PD symptoms.
Affiliation(s)
- Orly Halperin
- Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Roie Karni
- Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Simon Israeli-Korn
- Movement Disorders Institute and the Department of Neurology, Sheba Medical Center, Tel Hashomer, Ramat Gan, Israel
- The Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Sharon Hassin-Baer
- Movement Disorders Institute and the Department of Neurology, Sheba Medical Center, Tel Hashomer, Ramat Gan, Israel
- The Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Adam Zaidel
- Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
32
Beierholm U, Rohe T, Ferrari A, Stegle O, Noppeney U. Using the past to estimate sensory uncertainty. eLife 2020; 9:54172. [PMID: 33319749 PMCID: PMC7806269 DOI: 10.7554/elife.54172] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2019] [Accepted: 12/13/2020] [Indexed: 01/14/2023] Open
Abstract
To form a more reliable percept of the environment, the brain needs to estimate its own sensory uncertainty. Current theories of perceptual inference assume that the brain computes sensory uncertainty instantaneously and independently for each stimulus. We evaluated this assumption in four psychophysical experiments, in which human observers localized auditory signals that were presented synchronously with spatially disparate visual signals. Critically, the visual noise changed dynamically over time continuously or with intermittent jumps. Our results show that observers integrate audiovisual inputs weighted by sensory uncertainty estimates that combine information from past and current signals consistent with an optimal Bayesian learner that can be approximated by exponential discounting. Our results challenge leading models of perceptual inference where sensory uncertainty estimates depend only on the current stimulus. They demonstrate that the brain capitalizes on the temporal dynamics of the external world and estimates sensory uncertainty by combining past experiences with new incoming sensory signals.
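The two computations this abstract combines, an exponentially discounted estimate of sensory variance and reliability-weighted audiovisual fusion, can be sketched numerically. This is a minimal illustration, not the authors' model: the function names are ours, `lam` is an assumed discount factor, and the inverse-variance weighting is the standard optimal-integration rule the study builds on.

```python
def discounted_variance(squared_errors, lam=0.9):
    """Running estimate of sensory variance that exponentially discounts
    the past: the weight on a sample k steps back decays as lam**k,
    approximating the optimal Bayesian learner described in the abstract.
    lam=0.9 is an assumed, illustrative value."""
    est = squared_errors[0]
    for s in squared_errors[1:]:
        est = lam * est + (1 - lam) * s
    return est

def fuse(x_aud, var_aud, x_vis, var_vis):
    """Reliability-weighted (inverse-variance) fusion of two independent
    Gaussian cues: the noisier cue receives the smaller weight."""
    w_aud = (1 / var_aud) / (1 / var_aud + 1 / var_vis)
    return w_aud * x_aud + (1 - w_aud) * x_vis
```

With equal variances the fused location is the midpoint of the two signals; shrinking the visual variance pulls the estimate toward the visual signal, which is how uncertainty estimates carried over from past signals would shift localization of the current one.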
Affiliation(s)
- Ulrik Beierholm
- Psychology Department, Durham University, Durham, United Kingdom
- Tim Rohe
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany
- Department of Psychology, Friedrich-Alexander University Erlangen-Nuernberg, Erlangen, Germany
- Ambra Ferrari
- Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, United Kingdom
- Oliver Stegle
- Max Planck Institute for Intelligent Systems, Tübingen, Germany
- European Molecular Biology Laboratory, Genome Biology Unit, Heidelberg, Germany
- Division of Computational Genomics and Systems Genetics, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Uta Noppeney
- Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, United Kingdom
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
33
Gao Z, Zhai G, Yang X. Stereoscopic 3D geometric distortions analyzed from the viewer's point of view. PLoS One 2020; 15:e0240661. [PMID: 33057363 PMCID: PMC7561172 DOI: 10.1371/journal.pone.0240661] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2020] [Accepted: 09/30/2020] [Indexed: 12/04/2022] Open
Abstract
Stereoscopic 3D (S3D) geometric distortions can be introduced by mismatches among image capture, display, and viewing configurations. In previous work on S3D geometric models, geometric distortions have been analyzed from a third-person perspective based on the binocular depth cue (i.e., binocular disparity). A third-person perspective is different from what the viewer sees since monocular depth cues (e.g., linear perspective, occlusion, and shadows) from different perspectives are different. However, depth perception in a 3D space involves both monocular and binocular depth cues. Geometric distortions that are solely predicted by the binocular depth cue cannot describe what a viewer really perceives. In this paper, we combine geometric models and retinal disparity models to analyze geometric distortions from the viewer's perspective where both monocular and binocular depth cues are considered. Results show that binocular and monocular depth cues conflict in a geometrically distorted S3D space. Moreover, user-initiated head translations away from the optimal viewing position in conventional S3D displays can also introduce geometric distortions, which are inconsistent with our natural 3D viewing condition. The inconsistency of depth cues in a dynamic scene may be a source of visually induced motion sickness.
Affiliation(s)
- Zhongpai Gao
- Artificial Intelligence Institute, Shanghai Jiao Tong University, Shanghai, China
- Guangtao Zhai
- Artificial Intelligence Institute, Shanghai Jiao Tong University, Shanghai, China
- Xiaokang Yang
- Artificial Intelligence Institute, Shanghai Jiao Tong University, Shanghai, China
34
Abstract
Mobile organisms make use of spatial cues to navigate effectively in the world, such as visual and self-motion cues. Over the past decade, researchers have investigated how human navigators combine spatial cues, and whether cue combination is optimal according to statistical principles, by varying the number of cues available in homing tasks. The methodological approaches employed by researchers have varied, however. One important methodological difference exists in the number of cues available to the navigator during the outbound path for single-cue trials. In some studies, navigators have access to all spatial cues on the outbound path and all but one cue is eliminated prior to execution of the return path in the single-cue conditions; in other studies, navigators only have access to one spatial cue on the outbound and return paths in the single-cue conditions. If navigators can integrate cues along the outbound path, single-cue estimates may be contaminated by the undesired cue, which will in turn affect the predictions of models of optimal cue integration. In the current experiment, we manipulated the number of cues available during the outbound path for single-cue trials, while keeping dual-cue trials constant. This variable did not affect performance in the homing task; in particular, homing performance was better in dual-cue conditions than in single-cue conditions and was statistically optimal. Both methodological approaches to measuring spatial cue integration during navigation are appropriate.
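The optimality benchmark such homing studies test against is the maximum-likelihood prediction for combining two independent cues; a minimal sketch (the function name is ours, not from the study):

```python
def optimal_combined_variance(var_visual, var_selfmotion):
    """Variance predicted for a statistically optimal (maximum-likelihood)
    combination of two independent spatial cues. It never exceeds the
    variance of the more reliable single cue, so dual-cue homing error
    at or below this level counts as statistically optimal."""
    return (var_visual * var_selfmotion) / (var_visual + var_selfmotion)
```

If a single-cue estimate is contaminated by an undesired second cue, its measured variance understates the true unisensory variance, which in turn lowers this predicted dual-cue variance; that is why the outbound-path manipulation described above matters for the model's predictions.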
35
Yakubovich S, Israeli-Korn S, Halperin O, Yahalom G, Hassin-Baer S, Zaidel A. Visual self-motion cues are impaired yet overweighted during visual-vestibular integration in Parkinson's disease. Brain Commun 2020; 2:fcaa035. [PMID: 32954293 PMCID: PMC7425426 DOI: 10.1093/braincomms/fcaa035] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2019] [Revised: 02/17/2020] [Accepted: 03/11/2020] [Indexed: 11/25/2022] Open
Abstract
Parkinson's disease is prototypically a movement disorder. Although perceptual and motor functions are highly interdependent, much less is known about perceptual deficits in Parkinson's disease, which are less observable by nature, and might go unnoticed if not tested directly. It is therefore imperative to seek and identify these, to fully understand the challenges facing patients with Parkinson's disease. Also, perceptual deficits may be related to motor symptoms. Posture, gait and balance, affected in Parkinson's disease, rely on veridical perception of one's own motion (self-motion) in space. Yet it is not known whether self-motion perception is impaired in Parkinson's disease. Using a well-established multisensory paradigm of heading discrimination (that has not been previously applied to Parkinson's disease), we tested unisensory visual and vestibular self-motion perception, as well as multisensory integration of visual and vestibular cues, in 19 Parkinson's disease, 23 healthy age-matched and 20 healthy young-adult participants. After experiencing vestibular (on a motion platform), visual (optic flow) or multisensory (combined visual-vestibular) self-motion stimuli at various headings, participants reported whether their perceived heading was to the right or left of straight ahead. Parkinson's disease participants and age-matched controls were tested twice (Parkinson's disease participants on and off medication). Parkinson's disease participants demonstrated significantly impaired visual self-motion perception compared with age-matched controls on both visits, irrespective of medication status. Young controls performed slightly (but not significantly) better than age-matched controls and significantly better than the Parkinson's disease group. The visual self-motion perception impairment in Parkinson's disease correlated significantly with clinical disease severity. By contrast, vestibular performance was unimpaired in Parkinson's disease. 
Remarkably, despite impaired visual self-motion perception, Parkinson's disease participants significantly overweighted the visual cues during multisensory (visual-vestibular) integration (compared with Bayesian predictions of optimal integration) and significantly more than controls. These findings indicate that self-motion perception in Parkinson's disease is affected by impaired visual cues and by suboptimal visual-vestibular integration (overweighting of visual cues). Notably, vestibular self-motion perception was unimpaired. Thus, visual self-motion perception is specifically impaired in early-stage Parkinson's disease. This can impact Parkinson's disease diagnosis and subtyping. Overweighting of visual cues could reflect a general multisensory integration deficit in Parkinson's disease, or specific overestimation of visual cue reliability. Finally, impaired self-motion perception in Parkinson's disease may contribute to impaired balance and gait control. Future investigation into this connection might open up new avenues of alternative therapies to better treat these difficult symptoms.
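The overweighting comparison reported here can be expressed as the gap between the empirically measured visual weight and the Bayesian-optimal weight computed from unisensory thresholds. A schematic sketch (names and example values are ours, not the study's data):

```python
def bayes_visual_weight(sigma_vis, sigma_vest):
    """Bayesian-optimal visual weight from unisensory discrimination
    thresholds (inverse-variance weighting): a noisier visual cue
    should receive a smaller weight."""
    return sigma_vest ** 2 / (sigma_vis ** 2 + sigma_vest ** 2)

def visual_overweighting(empirical_w_vis, sigma_vis, sigma_vest):
    """Positive values mean the visual cue is weighted more heavily than
    the Bayesian prediction, the pattern reported for the PD group."""
    return empirical_w_vis - bayes_visual_weight(sigma_vis, sigma_vest)
```

Because the PD group's visual thresholds were elevated, the optimal visual weight for them is lower than for controls, which makes the observed visual overweighting all the more striking.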
Affiliation(s)
- Sol Yakubovich
- Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel
- Simon Israeli-Korn
- Department of Neurology, Movement Disorders Institute, Sheba Medical Center, Tel Hashomer, Ramat Gan 5266202, Israel
- The Neurology and Neurosurgery Department, The Sackler School of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
- Orly Halperin
- Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel
- Gilad Yahalom
- Department of Neurology, Movement Disorders Institute, Sheba Medical Center, Tel Hashomer, Ramat Gan 5266202, Israel
- Department of Neurology, Movement Disorders Clinic, Shaare Zedek Medical Center, Jerusalem 9103102, Israel
- Sharon Hassin-Baer
- Department of Neurology, Movement Disorders Institute, Sheba Medical Center, Tel Hashomer, Ramat Gan 5266202, Israel
- The Neurology and Neurosurgery Department, The Sackler School of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
- Adam Zaidel
- Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel
36
Halperin O, Israeli‐Korn S, Yakubovich S, Hassin‐Baer S, Zaidel A. Self‐motion perception in Parkinson's disease. Eur J Neurosci 2020; 53:2376-2387. [DOI: 10.1111/ejn.14716] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2020] [Revised: 02/23/2020] [Accepted: 02/24/2020] [Indexed: 12/11/2022]
Affiliation(s)
- Orly Halperin
- Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Simon Israeli‐Korn
- Department of Neurology, Movement Disorders Institute, Sheba Medical Center, Ramat Gan, Israel
- The Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Sol Yakubovich
- Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Sharon Hassin‐Baer
- Department of Neurology, Movement Disorders Institute, Sheba Medical Center, Ramat Gan, Israel
- The Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Adam Zaidel
- Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
37
Martinez J, Baca J, King SA. Towards predicting sensorimotor disorders in older adults via Bayesian probabilistic theory and mixed reality. SN APPLIED SCIENCES 2020. [DOI: 10.1007/s42452-019-1666-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
38
Avraham G, Sulimani E, Mussa-Ivaldi FA, Nisky I. Effects of visuomotor delays on the control of movement and on perceptual localization in the presence and absence of visual targets. J Neurophysiol 2019; 122:2259-2271. [PMID: 31577532 DOI: 10.1152/jn.00017.2019] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The sensory system constantly deals with delayed feedback. Recent studies showed that playing a virtual game of pong with delayed feedback caused hypermetric reaching movements. We investigated whether this effect is associated with a perceptual bias. In addition, we examined the importance of the target in causing hypermetric movements. In a first experiment, participants played a delayed pong game and blindly reached to presented targets. Following each reaching movement, they assessed the position of the invisible cursor. We found that participants performed hypermetric movements but reported that the invisible cursor reached the target, suggesting that they were unaware of the hypermetria and that their perception was biased toward the target rather than toward their hand position. In a second experiment, we removed the visual target, and strikingly, the hypermetria vanished. Moreover, participants reported that the invisible cursor was located with their hand. Taking these results together, we conclude that the adaptation to the visuomotor delay during the pong game selectively affected the execution of goal directed movements, resulting in hypermetria and perceptual bias when movements are directed toward visual targets but not when such targets are absent.
NEW & NOTEWORTHY Recent studies showed that adaptation to visuomotor delays causes hypermetric movements in the absence of visual feedback, suggesting that visuomotor delay is represented using current state information. We report that this adaptation also affects perception. Importantly, both the motor and perceptual effects are selective to the representations that are used in the execution of goal-directed movements toward visual targets.
Affiliation(s)
- Guy Avraham
- Department of Biomedical Engineering, Ben-Gurion University of the Negev, Be'er-Sheva, Israel
- Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Be'er-Sheva, Israel
- Department of Psychology, University of California, Berkeley, Berkeley, California
- Helen Wills Neuroscience Institute, University of California, Berkeley, California
- Erez Sulimani
- Department of Biomedical Engineering, Ben-Gurion University of the Negev, Be'er-Sheva, Israel
- Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Be'er-Sheva, Israel
- Ferdinando A Mussa-Ivaldi
- Shirley Ryan AbilityLab, Chicago, Illinois
- Department of Biomedical Engineering, Northwestern University, Evanston, Illinois
- Ilana Nisky
- Department of Biomedical Engineering, Ben-Gurion University of the Negev, Be'er-Sheva, Israel
- Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Be'er-Sheva, Israel
39
Bejjanki VR, Randrup ER, Aslin RN. Young children combine sensory cues with learned information in a statistically efficient manner: But task complexity matters. Dev Sci 2019; 23:e12912. [PMID: 31608526 DOI: 10.1111/desc.12912] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2018] [Revised: 07/31/2019] [Accepted: 10/08/2019] [Indexed: 11/29/2022]
Abstract
Human adults are adept at mitigating the influence of sensory uncertainty on task performance by integrating sensory cues with learned prior information, in a Bayes-optimal fashion. Previous research has shown that young children and infants are sensitive to environmental regularities, and that the ability to learn and use such regularities is involved in the development of several cognitive abilities. However, it has also been reported that children younger than 8 years do not combine simultaneously available sensory cues in a Bayes-optimal fashion. Thus, it remains unclear whether, and by what age, children can combine sensory cues with learned regularities in an adult manner. Here, we examine the performance of 6- to 7-year-old children when tasked with localizing a 'hidden' target by combining uncertain sensory information with prior information learned over repeated exposure to the task. We demonstrate that 6- to 7-year-olds learn task-relevant statistics at a rate on par with adults, and like adults, are capable of integrating learned regularities with sensory information in a statistically efficient manner. We also show that variables such as task complexity can influence young children's behavior to a greater extent than that of adults, leading their behavior to look sub-optimal. Our findings have important implications for how we should interpret failures in young children's ability to carry out sophisticated computations. These 'failures' need not be attributed to deficits in the fundamental computational capacity available to children early in development, but rather to ancillary immaturities in general cognitive abilities that mask the operation of these computations in specific situations.
Affiliation(s)
- Vikranth R Bejjanki
- Department of Psychology, Hamilton College, Clinton, NY, USA
- Program in Neuroscience, Hamilton College, Clinton, NY, USA
- Emily R Randrup
- Department of Psychology, Hamilton College, Clinton, NY, USA
40
Distance perception during self-movement. Hum Mov Sci 2019; 67:102496. [PMID: 31301557 DOI: 10.1016/j.humov.2019.102496] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2018] [Revised: 07/03/2019] [Accepted: 07/04/2019] [Indexed: 11/20/2022]
Abstract
The perception of distance in open fields has been widely studied with static observers. However, we and the world around us are in continuous relative movement, and our perceptual experience is shaped by the complex interactions between our senses and the perception of our self-motion. This poses interesting questions about how our nervous system integrates this multisensory information to resolve specific tasks of our daily life, for example, distance estimation. This study provides new evidence about how visual and motor self-motion information affects our perception of distance, and proposes a hypothesis about how these two sources of information can be integrated to calibrate the estimation of distance. This model accounts for the biases found when visual and proprioceptive information is inconsistent.
41
Shea N, Frith CD. The Global Workspace Needs Metacognition. Trends Cogn Sci 2019; 23:560-571. [DOI: 10.1016/j.tics.2019.04.007] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2018] [Revised: 02/12/2019] [Accepted: 04/22/2019] [Indexed: 12/20/2022]
42
Gustafsson L. A Case of Near-Optimal Sensory Integration Based on Kohonen Self-Organizing Maps. Neural Comput 2019; 31:1419-1429. [PMID: 31113302 DOI: 10.1162/neco_a_01200] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
This letter shows by digital simulation that a simple rule, applied to one-dimensional self-organizing maps that integrate sensory perceptions from two identical sources yielding position information as integers corrupted by independent noise, produces almost statistically optimal position estimates as determined by maximum likelihood estimation. There is no learning of the corrupting noise sources, nor is any information about the statistics of the noise sources available to the integrating process. The simple rule employed also yields a measure of the quality of the estimated position of the source. The letter further shows that if the Bayesian estimates, which are rational numbers, are rounded in order to comply with the stipulation that integers be identified, the Bayesian estimation will have a larger variance than the proposed integration.
Affiliation(s)
- Lennart Gustafsson
- Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, 971 87 Luleå, Sweden
43
Zhang WH, Wang H, Chen A, Gu Y, Lee TS, Wong KM, Wu S. Complementary congruent and opposite neurons achieve concurrent multisensory integration and segregation. eLife 2019; 8:43753. [PMID: 31120416 PMCID: PMC6565362 DOI: 10.7554/elife.43753] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2018] [Accepted: 05/22/2019] [Indexed: 11/13/2022] Open
Abstract
Our brain perceives the world by exploiting multisensory cues to extract information about various aspects of external stimuli. The sensory cues from the same stimulus should be integrated to improve perception, and otherwise segregated to distinguish different stimuli. In reality, however, the brain faces the challenge of recognizing stimuli without knowing in advance the sources of sensory cues. To address this challenge, we propose that the brain conducts integration and segregation concurrently with complementary neurons. Studying the inference of heading-direction via visual and vestibular cues, we develop a network model with two reciprocally connected modules modeling interacting visual-vestibular areas. In each module, there are two groups of neurons whose tunings under each sensory cue are either congruent or opposite. We show that congruent neurons implement integration, while opposite neurons compute cue disparity information for segregation, and the interplay between two groups of neurons achieves efficient multisensory information processing.
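The division of labor proposed here can be caricatured in a few lines: congruent neurons effectively read out a combined heading, while opposite neurons carry the cue disparity used for segregation. This is a schematic sketch of the two readouts, not the paper's recurrent network model; function names are ours, and headings are radians.

```python
import math

def congruent_readout(theta_vis, theta_vest):
    """Integration: circular mean of the visual and vestibular heading
    cues (equal weights assumed for illustration)."""
    x = math.cos(theta_vis) + math.cos(theta_vest)
    y = math.sin(theta_vis) + math.sin(theta_vest)
    return math.atan2(y, x)

def opposite_readout(theta_vis, theta_vest):
    """Segregation: signed cue disparity, wrapped to (-pi, pi]. Large
    values signal that the cues likely come from different sources."""
    d = theta_vis - theta_vest
    return math.atan2(math.sin(d), math.cos(d))
```

When the disparity readout is near zero, integrating the cues is appropriate; when it is large, the combined estimate should be discounted in favor of the separate unisensory estimates.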
Affiliation(s)
- Wen-Hao Zhang
- Department of Physics, Hong Kong University of Science and Technology, Hong Kong
- Center of the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States
- He Wang
- Department of Physics, Hong Kong University of Science and Technology, Hong Kong
- Aihua Chen
- Key Laboratory of Brain Functional Genomics, Primate Research Center, East China Normal University, Shanghai, China
- Yong Gu
- Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- Tai Sing Lee
- Center of the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States
- Ky Michael Wong
- Department of Physics, Hong Kong University of Science and Technology, Hong Kong
- Si Wu
- School of Electronics Engineering and Computer Science, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, China
44
Gollo LL, Karim M, Harris JA, Morley JW, Breakspear M. Hierarchical and Nonlinear Dynamics in Prefrontal Cortex Regulate the Precision of Perceptual Beliefs. Front Neural Circuits 2019; 13:27. [PMID: 31068794 PMCID: PMC6491505 DOI: 10.3389/fncir.2019.00027] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2018] [Accepted: 03/29/2019] [Indexed: 11/13/2022] Open
Abstract
Actions are shaped not only by the content of our percepts but also by our confidence in them. To study the cortical representation of perceptual precision in decision making, we acquired functional imaging data whilst participants performed two vibrotactile forced-choice discrimination tasks: a fast-slow judgment, and a same-different judgment. The first task requires a comparison of the perceived vibrotactile frequencies to decide which one is faster. However, the second task requires that the estimated difference between those frequencies is weighed against the precision of each percept: if both stimuli are very precisely perceived, then any slight difference is more likely to be identified than if the percepts are uncertain. We additionally presented either pure sinusoidal or temporally degraded "noisy" stimuli, whose frequency/period differed slightly from cycle to cycle. In this way, we were able to manipulate the perceptual precision. We report a constellation of cortical regions in the rostral prefrontal cortex (PFC), dorsolateral PFC (DLPFC) and superior frontal gyrus (SFG) associated with the perception of stimulus difference, the presence of stimulus noise and the interaction between these factors. Dynamic causal modeling (DCM) of these data suggested a nonlinear, hierarchical model, whereby activity in the rostral PFC (evoked by the presence of stimulus noise) mutually interacts with activity in the DLPFC (evoked by stimulus differences). This model of effective connectivity outperformed competing models with serial and parallel interactions, hence providing a unique insight into the hierarchical architecture underlying the representation and appraisal of perceptual belief and precision in the PFC.
Collapse
Affiliation(s)
- Leonardo L Gollo
- QIMR Berghofer Medical Research Institute, Brisbane, QLD, Australia.,Centre of Excellence for Integrative Brain Function, QIMR Berghofer Medical Research Institute, Brisbane, QLD, Australia
| | - Muhsin Karim
- School of Psychiatry, Faculty of Medicine, University of New South Wales, Sydney, NSW, Australia.,The Black Dog Institute, Sydney, NSW, Australia
| | - Justin A Harris
- School of Psychology, The University of Sydney, Sydney, NSW, Australia
| | - John W Morley
- School of Medicine, Western Sydney University, Sydney, NSW, Australia
| | - Michael Breakspear
- QIMR Berghofer Medical Research Institute, Brisbane, QLD, Australia; Centre of Excellence for Integrative Brain Function, QIMR Berghofer Medical Research Institute, Brisbane, QLD, Australia; School of Psychiatry, Faculty of Medicine, University of New South Wales, Sydney, NSW, Australia; The Black Dog Institute, Sydney, NSW, Australia; Metro North Mental Health Service, Brisbane, QLD, Australia; Hunter Medical Research Institute, University of Newcastle, New Lambton Heights, NSW, Australia
| |
Collapse
|
45
|
Stengård E, van den Berg R. Imperfect Bayesian inference in visual perception. PLoS Comput Biol 2019; 15:e1006465. [PMID: 30998675 PMCID: PMC6472731 DOI: 10.1371/journal.pcbi.1006465] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2018] [Accepted: 03/08/2019] [Indexed: 11/24/2022] Open
Abstract
Optimal Bayesian models have been highly successful in describing human performance on perceptual decision-making tasks, such as cue combination and visual search. However, recent studies have argued that these models are often overly flexible and therefore lack explanatory power. Moreover, there are indications that neural computation is inherently imprecise, which makes it implausible that humans would perform optimally on any non-trivial task. Here, we reconsider human performance on a visual-search task by using an approach that constrains model flexibility and tests for computational imperfections. Subjects performed a target detection task in which targets and distractors were tilted ellipses with orientations drawn from Gaussian distributions with different means. We varied the amount of overlap between these distributions to create multiple levels of external uncertainty. We also varied the level of sensory noise, by testing subjects under both short and unlimited display times. On average, empirical performance, measured as d', fell 18.1% short of optimal performance. We found no evidence that the magnitude of this suboptimality was affected by the level of internal or external uncertainty. The data were well accounted for by a Bayesian model with imperfections in its computations. This "imperfect Bayesian" model convincingly outperformed the "flawless Bayesian" model as well as all ten heuristic models that we tested. These results suggest that perception is founded on Bayesian principles, but with suboptimalities in the implementation of these principles. The view of perception as imperfect Bayesian inference can provide a middle ground between traditional Bayesian and anti-Bayesian views.
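The "imperfect Bayesian" idea can be sketched in simulation (a hypothetical illustration with made-up parameters, not the paper's model): an observer computes the Bayes-optimal log-likelihood ratio for target versus distractor, but the decision variable is corrupted by late "computation noise", which drags accuracy below the flawless Bayesian ceiling.

```python
import random

def llr(x, mu_t, mu_d, sigma):
    # Log-likelihood ratio for equal-variance Gaussian classes;
    # the quadratic terms cancel, leaving a rule linear in x.
    return ((x - mu_d) ** 2 - (x - mu_t) ** 2) / (2 * sigma ** 2)

def simulate(n_trials, mu_t=10.0, mu_d=0.0, sigma=8.0, late_noise=0.0, seed=0):
    """Fraction correct for a Bayesian observer whose decision variable
    is corrupted by Gaussian 'computation noise' of sd `late_noise`."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        is_target = rng.random() < 0.5
        x = rng.gauss(mu_t if is_target else mu_d, sigma)
        decision_var = llr(x, mu_t, mu_d, sigma) + rng.gauss(0.0, late_noise)
        if (decision_var > 0) == is_target:
            correct += 1
    return correct / n_trials

flawless = simulate(20000)                   # no computation noise
imperfect = simulate(20000, late_noise=1.0)  # imprecise computations
```

With these illustrative settings the imperfect observer remains well above chance but reliably below the flawless one, which is the qualitative pattern the abstract reports.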
Collapse
Affiliation(s)
- Elina Stengård
- Department of Psychology, University of Uppsala, Uppsala, Sweden
| | | |
Collapse
|
46
|
Toscano JC, Lansing CR. Age-Related Changes in Temporal and Spectral Cue Weights in Speech. LANGUAGE AND SPEECH 2019; 62:61-79. [PMID: 29103359 DOI: 10.1177/0023830917737112] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Listeners weight acoustic cues in speech according to their reliability, but few studies have examined how cue weights change across the lifespan. Previous work has suggested that older adults have deficits in auditory temporal discrimination, which could affect the reliability of temporal phonetic cues, such as voice onset time (VOT), and in turn, impact speech perception in real-world listening environments. We addressed this by examining younger and older adults' use of VOT and onset F0 (a secondary phonetic cue) for voicing judgments (e.g., /b/ vs. /p/), using both synthetic and naturally produced speech. We found age-related differences in listeners' use of the two voicing cues, such that older adults relied more heavily on onset F0 than younger adults, even though this cue is less reliable in American English. These results suggest that phonetic cue weights continue to change across the lifespan.
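Reliability-based cue weighting of the sort described here is often formalized by normalizing each cue's reliability (inverse variance) into a weight. The numbers below are purely hypothetical, chosen only to illustrate the qualitative age-related shift the abstract reports.

```python
def cue_weights(reliabilities):
    """Normalize cue reliabilities (1/variance) into weights summing to 1."""
    total = sum(reliabilities.values())
    return {cue: r / total for cue, r in reliabilities.items()}

# Hypothetical values: younger listeners treat VOT as far more reliable
# than onset F0; an age-related loss of temporal acuity shrinks that gap,
# shifting relative weight onto the secondary onset-F0 cue.
younger = cue_weights({"VOT": 9.0, "onset_F0": 1.0})
older   = cue_weights({"VOT": 4.0, "onset_F0": 2.0})
print(younger)  # VOT dominates
print(older)    # onset F0 carries more weight
```

The point of the sketch is structural: a change in the *relative* reliability of temporal information is enough to reweight cues, with no change to the cues themselves.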
Collapse
|
47
|
Keefe BD, Suray PA, Watt SJ. A margin for error in grasping: hand pre-shaping takes into account task-dependent changes in the probability of errors. Exp Brain Res 2019; 237:1063-1075. [PMID: 30747260 PMCID: PMC6430761 DOI: 10.1007/s00221-019-05489-z] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2018] [Accepted: 02/04/2019] [Indexed: 01/12/2023]
Abstract
Ideal grasping movements should maintain an appropriate probability of success, while controlling movement-related costs, in the presence of varying visual (and motor) uncertainty. It is often assumed that the probability of errors is managed by adjusting a margin for error in hand opening (e.g., opening the hand wider with increased visual uncertainty). This idea is intuitive, but non-trivial. It implies not only that the brain can estimate the amount of uncertainty, but also that it can compute how different possible alterations to the movement will affect the probability of errors—which we term the ‘probability landscape’. Previous work suggests the amount of uncertainty is factored into grasping movements. Our aim was to determine whether grasping movements are also sensitive to the probability landscape. Subjects completed three different grasping tasks, with naturally different probability landscapes, such that appropriate margin-for-error responses to increased uncertainty were qualitatively different (opening the hand wider, the same amount, or less wide). We increased visual uncertainty by blurring vision, and by covering one eye. Movements were performed without visual feedback to isolate uncertainty in the brain’s initial estimate of object properties. Changes to hand opening in response to increased visual uncertainty closely resembled those predicted by the margin-for-error account, suggesting that grasping is sensitive to the probability landscape associated with different tasks. Our findings therefore support the intuitive idea that grasping movements employ a true margin-for-error mechanism, which exerts active control over the probability of errors across changing circumstances.
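The margin-for-error account sketched above can be reduced to a one-line rule (a hypothetical illustration; the coefficient and units are invented): hand opening equals the estimated size plus a margin scaled by uncertainty, where the *sign* of the scaling reflects the task's probability landscape.

```python
def peak_aperture(size_est_mm, sigma_mm, k):
    """Grip aperture with an uncertainty-scaled safety margin.

    k encodes the task's 'probability landscape': positive when errors
    are avoided by opening wider, zero when hand opening is irrelevant
    to error probability, negative when opening wider would itself
    raise the chance of an error.
    """
    return size_est_mm + k * sigma_mm

# Hypothetical tasks: same object and same visual uncertainty,
# but qualitatively different appropriate responses.
print(peak_aperture(50.0, 5.0, k=2.0))   # 60.0 -> open wider
print(peak_aperture(50.0, 5.0, k=0.0))   # 50.0 -> no adjustment
print(peak_aperture(50.0, 5.0, k=-1.0))  # 45.0 -> open less wide
```

The three calls correspond to the three qualitative predictions the study tested: under increased uncertainty, the margin can appropriately grow, stay flat, or shrink depending on the task.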
Collapse
Affiliation(s)
- Bruce D Keefe
- School of Psychology, Bangor University, Penrallt Rd., Bangor, Gwynedd, LL57 2AS, UK
| | - Pierre-Arthur Suray
- School of Psychology, Bangor University, Penrallt Rd., Bangor, Gwynedd, LL57 2AS, UK
| | - Simon J Watt
- School of Psychology, Bangor University, Penrallt Rd., Bangor, Gwynedd, LL57 2AS, UK.
| |
Collapse
|
48
|
Legge ELG. Comparative spatial memory and cue use: The contributions of Marcia L. Spetch to the study of small-scale spatial cognition. Behav Processes 2019; 159:65-79. [PMID: 30611849 DOI: 10.1016/j.beproc.2018.12.018] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2018] [Revised: 12/23/2018] [Accepted: 12/23/2018] [Indexed: 11/25/2022]
Abstract
Dr. Marcia Spetch is a Canadian experimental psychologist who specializes in the study of comparative cognition. Her research over the past four decades has covered many diverse topics, but focused primarily on the comparative study of small-scale spatial cognition, navigation, decision making, and risky choice. Over the course of her career Dr. Spetch has had a profound influence on the study of these topics, and for her work she was named a Fellow of the Association for Psychological Science in 2012, and a Fellow of the Royal Society of Canada in 2017. In this review, I provide a biographical sketch of Dr. Spetch's academic career, and revisit her contributions to the study of small-scale spatial cognition in two broad areas: the use of environmental geometric cues, and how animals cope with cue conflict. The goal of this review is to highlight the contributions of Dr. Spetch, her students, and her collaborators to the field of comparative cognition and the study of small-scale spatial cognition. As such, this review stands to serve as a tribute and testament to Dr. Spetch's scientific legacy.
Collapse
Affiliation(s)
- Eric L G Legge
- Department of Psychology, MacEwan University, 10700 - 104 Avenue, City Centre Campus, Edmonton, AB, T5J 4S2, Canada.
| |
Collapse
|
49
|
Ursino M, Cuppini C, Magosso E, Beierholm U, Shams L. Explaining the Effect of Likelihood Manipulation and Prior Through a Neural Network of the Audiovisual Perception of Space. Multisens Res 2019; 32:111-144. [PMID: 31059469 DOI: 10.1163/22134808-20191324] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2018] [Accepted: 01/04/2019] [Indexed: 11/19/2022]
Abstract
Results in the recent literature suggest that multisensory integration in the brain follows the rules of Bayesian inference. However, how neural circuits can realize such inference and how it can be learned from experience is still the subject of active research. The aim of this work is to use a recent neurocomputational model to investigate how the likelihood and prior can be encoded in synapses, and how they affect audio-visual perception, in a variety of conditions characterized by different experience, different cue reliabilities and temporal asynchrony. The model considers two unisensory networks (auditory and visual) with plastic receptive fields and plastic crossmodal synapses, trained during a learning period. During training visual and auditory stimuli are more frequent and more tuned close to the fovea. Model simulations after training have been performed in crossmodal conditions to assess the auditory and visual perception bias: visual stimuli were positioned at different azimuth (±10° from the fovea) coupled with an auditory stimulus at various audio-visual distances (±20°). The cue reliability has been altered by using visual stimuli with two different contrast levels. Model predictions are compared with behavioral data. Results show that model predictions agree with behavioral data, in a variety of conditions characterized by a different role of prior and likelihood. Finally, the effect of a different unimodal or crossmodal prior, re-learning, temporal correlation among input stimuli, and visual damage (hemianopia) are tested, to reveal the possible use of the model in the clarification of important multisensory problems.
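The Bayesian computation that the neural network is meant to approximate has a standard closed form under forced fusion: the maximum-a-posteriori location is a precision-weighted average of the visual cue, the auditory cue, and a Gaussian spatial prior. The sketch below is a generic textbook rule with invented variances, not the authors' network.

```python
def fused_estimate(x_v, x_a, mu_p, var_v, var_a, var_p):
    """MAP location under forced fusion of Gaussian cues and a Gaussian
    spatial prior: a precision-weighted (1/variance) average."""
    w_v, w_a, w_p = 1 / var_v, 1 / var_a, 1 / var_p
    return (w_v * x_v + w_a * x_a + w_p * mu_p) / (w_v + w_a + w_p)

# High-contrast (reliable) vision dominates the fused percept...
print(fused_estimate(10.0, -10.0, 0.0, var_v=1.0, var_a=16.0, var_p=100.0))
# ...while low-contrast vision cedes weight to audition and the prior.
print(fused_estimate(10.0, -10.0, 0.0, var_v=16.0, var_a=16.0, var_p=100.0))
```

Lowering visual contrast (raising var_v) shifts the fused estimate toward the auditory cue and the prior mean, which is the contrast manipulation the abstract describes.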
Collapse
Affiliation(s)
- Mauro Ursino
- Department of Electrical, Electronic and Information Engineering, University of Bologna, Bologna, Italy
| | - Cristiano Cuppini
- Department of Electrical, Electronic and Information Engineering, University of Bologna, Bologna, Italy
| | - Elisa Magosso
- Department of Electrical, Electronic and Information Engineering, University of Bologna, Bologna, Italy
| | - Ulrik Beierholm
- Department of Psychology, Durham University, United Kingdom
| | - Ladan Shams
- Department of Psychology, Department of BioEngineering, Interdepartmental Neuroscience Program, University of California, Los Angeles, CA, USA
| |
Collapse
|
50
|
Jepma M, Koban L, van Doorn J, Jones M, Wager TD. Behavioural and neural evidence for self-reinforcing expectancy effects on pain. Nat Hum Behav 2018; 2:838-855. [PMID: 31558818 PMCID: PMC6768437 DOI: 10.1038/s41562-018-0455-8] [Citation(s) in RCA: 65] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2017] [Accepted: 09/19/2018] [Indexed: 01/30/2023]
Abstract
Beliefs and expectations often persist despite evidence to the contrary. Here we examine two potential mechanisms underlying such 'self-reinforcing' expectancy effects in the pain domain: modulation of perception and biased learning. In two experiments, cues previously associated with symbolic representations of high or low temperatures preceded painful heat. We examined trial-to-trial dynamics in participants' expected pain, reported pain and brain activity. Subjective and neural pain responses assimilated towards cue-based expectations, and pain responses in turn predicted subsequent expectations, creating a positive dynamic feedback loop. Furthermore, we found evidence for a confirmation bias in learning: higher- and lower-than-expected pain triggered greater expectation updating for high- and low-pain cues, respectively. Individual differences in this bias were reflected in the updating of pain-anticipatory brain activity. Computational modelling provided converging evidence that expectations influence both perception and learning. Together, perceptual assimilation and biased learning promote self-reinforcing expectations, helping to explain why beliefs can be resistant to change.
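The confirmation-biased learning described here is naturally written as a delta rule with asymmetric learning rates (a hypothetical sketch with invented rates, not the authors' fitted model): prediction errors that confirm the cue's meaning update expectations more strongly than errors that contradict it.

```python
def update_expectation(expected, observed, lr_confirm, lr_disconfirm, high_cue):
    """Delta-rule update with a confirmation bias.

    For a high-pain cue, higher-than-expected pain 'confirms' the cue and
    is learned from more strongly; for a low-pain cue, lower-than-expected
    pain plays the confirming role instead.
    """
    error = observed - expected
    confirming = (error > 0) == high_cue
    lr = lr_confirm if confirming else lr_disconfirm
    return expected + lr * error

# Hypothetical high-pain cue, expected pain 50 (arbitrary units):
print(update_expectation(50.0, 70.0, 0.5, 0.1, high_cue=True))  # 60.0
print(update_expectation(50.0, 30.0, 0.5, 0.1, high_cue=True))  # 48.0
```

Because confirming errors move the expectation five times as far as disconfirming ones here, repeated exposure drags expectations toward the cue's meaning, giving the self-reinforcing loop the abstract describes.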
Collapse
Affiliation(s)
- Marieke Jepma
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands.
- Department of Psychology and Neuroscience and Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, USA.
| | - Leonie Koban
- Department of Psychology and Neuroscience and Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, USA
| | - Johnny van Doorn
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
| | - Matt Jones
- Department of Psychology and Neuroscience and Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, USA
| | - Tor D Wager
- Department of Psychology and Neuroscience and Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, USA
| |
Collapse
|