1
Lunau K, Dyer AG. The modelling of flower colour: spectral purity or colour contrast as biologically relevant descriptors of flower colour signals for bees depending upon the perceptual task. Plant Biol (Stuttg) 2024. [PMID: 38958933 DOI: 10.1111/plb.13682]
Abstract
Flower colour is an important mediator of plant-pollinator interactions. While the reflectance of light from the flower surface and background are governed by physical properties, the perceptual interpretation of such information is generated by complex multilayered visual processing. Should quantitative modelling of flower signals strive for repeatable consistency enabled by parameter simplification, or should modelling reflect the dynamic way in which bees are known to process signals? We discuss why colour is an interpretation of spectral information by the brain of an animal. Different species, or individuals within a species, may respond differently to colour signals depending on sensory apparatus and/or individual experience. Humans and bees have different spectral ranges, but colour theory is strongly rooted in human colour perception, and many principles of colour vision appear to be common. We discuss bee colour perception based on physiological, neuroanatomical and behavioural evidence to provide a pathway for modelling flower colours. We examine whether flower petals and floral guides viewed against spectrally different backgrounds should be considered a simple colour-contrast problem or require a more dynamic consideration of how bees make perceptual decisions. We discuss how plants such as deceptive orchids may present signals that exploit bee perception, whilst many plants provide honest signalling, in which perceived saturation indicates the probability of collecting nutritional rewards towards the centre of a flower, thereby facilitating effective pollination.
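A common way to quantify the descriptors this abstract discusses is Chittka's colour-hexagon model, in which photoreceptor quantum catches are transduced into excitations and mapped to a two-dimensional locus; colour contrast is then the distance between two loci, and saturation the distance of a locus from the achromatic centre. The sketch below illustrates that arithmetic only; the quantum-catch values are invented, not taken from the paper:

```python
import math

def excitation(quantum_catch):
    """Nonlinear receptor transduction: E = P / (P + 1)."""
    return quantum_catch / (quantum_catch + 1.0)

def hexagon_locus(e_uv, e_blue, e_green):
    """Map relative excitations (0-1) of a trichromatic bee's UV, blue
    and green receptors to colour-hexagon coordinates."""
    x = (math.sqrt(3) / 2.0) * (e_green - e_uv)
    y = e_blue - 0.5 * (e_uv + e_green)
    return x, y

def colour_contrast(locus_a, locus_b):
    """Euclidean distance between two hexagon loci (hexagon units)."""
    return math.dist(locus_a, locus_b)

# Illustrative (made-up) quantum catches for a petal and its background
petal = hexagon_locus(*(excitation(p) for p in (0.2, 0.9, 1.4)))
background = hexagon_locus(*(excitation(p) for p in (0.8, 0.8, 0.8)))

contrast = colour_contrast(petal, background)    # petal vs. background
saturation = colour_contrast(petal, (0.0, 0.0))  # distance from achromatic centre
```

Which of the two numbers better predicts bee behaviour is exactly the question the paper frames as depending on the perceptual task.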
Affiliation(s)
- K Lunau
- Faculty of Mathematics and Natural Sciences, Institute of Sensory Ecology, Heinrich-Heine University, Düsseldorf, Germany
- A G Dyer
- Department of Physiology, Monash University, Clayton, Australia
- Institut für Entwicklungsbiologie und Neurobiologie, Johannes Gutenberg Universität, Mainz, Germany
2
Durand JB, Marchand S, Nasres I, Laeng B, De Castro V. Illusory light drives pupil responses in primates. J Vis 2024; 24:14. [PMID: 39046721 PMCID: PMC11271809 DOI: 10.1167/jov.24.7.14]
Abstract
In humans, the eye pupils respond to both physical light sensed by the retina and mental representations of light produced by the brain. Notably, our pupils constrict when a visual stimulus is illusorily perceived as brighter, even if retinal illumination is constant. However, it remains unclear whether such perceptual penetrability of pupil responses is an epiphenomenon unique to humans or whether it represents an adaptive mechanism, shared with other animals, to anticipate variations in retinal illumination between successive eye fixations. To address this issue, we measured the pupil responses of both humans and macaque monkeys exposed to three chromatic versions (cyan, magenta, and yellow) of the Asahi brightness illusion. We found that stimuli illusorily perceived as brighter or darker trigger differential pupil responses that are very similar in macaques and human participants. Additionally, we show that this phenomenon exhibits an analogous cyan bias in both primate species. Beyond establishing the macaque monkey as a relevant model for studying the perceptual penetrability of pupil responses, our results suggest that this phenomenon is tuned to ecological conditions, because exposure to a "bright cyan-bluish sky" may be associated with increased risk of dazzle and retinal damage.
Affiliation(s)
- Jean-Baptiste Durand
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Centre National de la Recherche Scientifique, Toulouse, France
- Sarah Marchand
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Centre National de la Recherche Scientifique, Toulouse, France
- Ilyas Nasres
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Centre National de la Recherche Scientifique, Toulouse, France
- Bruno Laeng
- Department of Psychology, University of Oslo, Oslo, Norway
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Vanessa De Castro
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Centre National de la Recherche Scientifique, Toulouse, France
3
Qiu T, An Q, Wang J, Wang J, Qiu CW, Li S, Lv H, Cai M, Wang J, Cong L, Qu S. Vision-driven metasurfaces for perception enhancement. Nat Commun 2024; 15:1631. [PMID: 38388545 PMCID: PMC10883922 DOI: 10.1038/s41467-024-45296-x]
Abstract
Metasurfaces have exhibited an unprecedented degree of freedom in manipulating electromagnetic (EM) waves and thus provide powerful front-end interfaces for smart systems. Here we show a framework for perception enhancement based on vision-driven metasurfaces. Human eye movements are matched with microwave radiation to extend the human perception spectrum. By this means, our eyes can "sense" both visible information and otherwise invisible microwave information. Several experimental demonstrations are given for specific implementations, including a physiological-signal-monitoring system, an "X-ray glasses" system, a "glimpse-and-forget" tracking system and a speech-reception system for deaf people. Both simulation and experimental results verify clear advantages in perception enhancement and information-acquisition efficiency. This framework can be readily integrated into healthcare systems to monitor physiological signals and to assist people with disabilities. This work provides an alternative framework for perception enhancement and may find wide application in healthcare, wearable devices, search-and-rescue and other fields.
Affiliation(s)
- Tianshuo Qiu
- Department of Biomedical Engineering, Fourth Military Medical University, Xi'an, China
- Fundamentals Department, Air Force Engineering University, Xi'an, China
- State Key Laboratory of Millimeter Waves, Southeast University, Nanjing, China
- Qiang An
- Department of Biomedical Engineering, Fourth Military Medical University, Xi'an, China
- Jianqi Wang
- Department of Biomedical Engineering, Fourth Military Medical University, Xi'an, China
- Jiafu Wang
- Aerospace metamaterials laboratory of SuZhou National Laboratory, Suzhou, China
- Cheng-Wei Qiu
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
- Shiyong Li
- School of Integrated Circuits and Electronics, Beijing Institute of Technology, Beijing, China
- Hao Lv
- Department of Biomedical Engineering, Fourth Military Medical University, Xi'an, China
- Ming Cai
- Fundamentals Department, Air Force Engineering University, Xi'an, China
- Jianyi Wang
- Department of Neurology, the First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China
- Lin Cong
- Department of Biomedical Engineering, Fourth Military Medical University, Xi'an, China
- Shaobo Qu
- Aerospace metamaterials laboratory of SuZhou National Laboratory, Suzhou, China
4
Peelen MV, Berlot E, de Lange FP. Predictive processing of scenes and objects. Nat Rev Psychol 2024; 3:13-26. [PMID: 38989004 PMCID: PMC7616164 DOI: 10.1038/s44159-023-00254-0]
Abstract
Real-world visual input consists of rich scenes that are meaningfully composed of multiple objects which interact in complex, but predictable, ways. Despite this complexity, we recognize scenes, and objects within these scenes, from a brief glance at an image. In this review, we synthesize recent behavioral and neural findings that elucidate the mechanisms underlying this impressive ability. First, we review evidence that visual object and scene processing is partly implemented in parallel, allowing for a rapid initial gist of both objects and scenes concurrently. Next, we discuss recent evidence for bidirectional interactions between object and scene processing, with scene information modulating the visual processing of objects, and object information modulating the visual processing of scenes. Finally, we review evidence that objects also combine with each other to form object constellations, modulating the processing of individual objects within the object pathway. Altogether, these findings can be understood by conceptualizing object and scene perception as the outcome of a joint probabilistic inference, in which "best guesses" about objects act as priors for scene perception and vice versa, in order to concurrently optimize visual inference of objects and scenes.
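The "joint probabilistic inference" framing of the review's conclusion can be illustrated with a toy discrete generative model, in which a scene prior sharpens ambiguous object evidence. All probability tables below are invented for illustration and are not from the review:

```python
# Toy joint inference over an object o and a scene s: the co-occurrence
# prior p(o|s) lets clear scene evidence disambiguate weak object evidence,
# and (symmetrically) object evidence would constrain the scene.
scenes = ["kitchen", "bathroom"]
objects = ["toaster", "hairdryer"]

p_s = {"kitchen": 0.5, "bathroom": 0.5}                  # scene prior
p_o_given_s = {                                          # co-occurrence prior
    ("toaster", "kitchen"): 0.9, ("hairdryer", "kitchen"): 0.1,
    ("toaster", "bathroom"): 0.1, ("hairdryer", "bathroom"): 0.9,
}
lik_obj = {"toaster": 0.55, "hairdryer": 0.45}           # ambiguous object evidence
lik_scene = {"kitchen": 0.8, "bathroom": 0.2}            # clearer scene evidence

joint = {(o, s): lik_obj[o] * lik_scene[s] * p_o_given_s[(o, s)] * p_s[s]
         for o in objects for s in scenes}
z = sum(joint.values())
posterior = {k: v / z for k, v in joint.items()}

# Marginal belief about the object: the scene context pushes the ambiguous
# object evidence (0.55 vs 0.45) firmly towards "toaster".
p_toaster = sum(posterior[("toaster", s)] for s in scenes)
```

Here the object evidence alone barely favours "toaster", but conditioning on the scene raises that belief substantially, which is the sense in which "best guesses" about scenes act as priors for objects.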
Affiliation(s)
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Eva Berlot
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Floris P de Lange
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
5
von Gal A, Boccia M, Nori R, Verde P, Giannini AM, Piccardi L. Neural networks underlying visual illusions: An activation likelihood estimation meta-analysis. Neuroimage 2023; 279:120335. [PMID: 37591478 DOI: 10.1016/j.neuroimage.2023.120335]
Abstract
Visual illusions have long been used to study visual perception and contextual integration. Neuroimaging studies employ illusions to identify the brain regions involved in visual perception and how they interact. We conducted an Activation Likelihood Estimation (ALE) meta-analysis and meta-analytic connectivity modeling on fMRI studies using static and motion illusions to reveal the neural signatures of illusory processing and to investigate the degree to which different areas are commonly recruited in perceptual inference. The resulting networks encompass ventral and dorsal regions, including the inferior and middle occipital cortices bilaterally in both types of illusions. The static and motion illusion networks selectively included the right posterior parietal cortex and the ventral premotor cortex, respectively. Overall, these results describe a network of areas crucially involved in perceptual inference, relying on feed-back and feed-forward interactions between areas of the ventral and dorsal visual pathways. The same network is proposed to be involved in hallucinatory symptoms characteristic of schizophrenia and other disorders, with crucial implications for the use of illusions as biomarkers.
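For readers unfamiliar with the method, the core of an ALE analysis can be sketched in a few lines: each study's reported foci are smoothed into a modelled-activation (MA) map, and the ALE score at each voxel is the probability that at least one study activates it. The 1-D toy below uses invented foci and a fixed kernel width (real ALE derives the width from each study's sample size):

```python
import math

def gaussian_ma(foci, grid, fwhm=10.0):
    """Modelled-activation map for one study: a Gaussian kernel centred
    on each reported focus, combined per voxel by taking the maximum."""
    sigma = fwhm / 2.3548  # convert FWHM to a standard deviation
    return [max(math.exp(-((x - f) ** 2) / (2 * sigma ** 2)) for f in foci)
            for x in grid]

def ale(ma_maps):
    """ALE score per voxel: ALE = 1 - prod_i(1 - MA_i), i.e. the
    probability that at least one study activates the voxel."""
    out = []
    for v in range(len(ma_maps[0])):
        p_none = 1.0
        for ma in ma_maps:
            p_none *= 1.0 - ma[v]
        out.append(1.0 - p_none)
    return out

# 1-D toy "brain" with three studies converging near coordinate 50
grid = list(range(0, 101))
studies = [gaussian_ma([48.0], grid),
           gaussian_ma([52.0], grid),
           gaussian_ma([50.0, 90.0], grid)]
ale_map = ale(studies)
peak = grid[ale_map.index(max(ale_map))]  # falls in the convergence cluster
```

In a real analysis the resulting ALE map is then thresholded against a null distribution of randomly relocated foci; that permutation step is omitted here.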
Affiliation(s)
- Maddalena Boccia
- Department of Psychology, Sapienza University of Rome, Rome, Italy; Cognitive and Motor Rehabilitation and Neuroimaging Unit, IRCCS Fondazione Santa Lucia, Rome, Italy
- Raffaella Nori
- Department of Psychology, University of Bologna, Bologna, Italy
- Paola Verde
- Italian Air Force Experimental Flight Center, Aerospace Medicine Department, Pratica di Mare, Rome, Italy
- Laura Piccardi
- Department of Psychology, Sapienza University of Rome, Rome, Italy; San Raffaele Cassino Hospital, Cassino, FR, Italy
6
Freeman TCA, Powell G. Perceived speed at low luminance: Lights out for the Bayesian observer? Vision Res 2022; 201:108124. [PMID: 36193604 DOI: 10.1016/j.visres.2022.108124]
Abstract
To account for perceptual bias, Bayesian models use the precision of early sensory measurements to weight the influence of prior expectations. As precision decreases, prior expectations start to dominate. Important examples come from motion perception, where the slow-motion prior has been used to explain a variety of motion illusions in vision, hearing, and touch, many of which correlate appropriately with threshold measures of underlying precision. However, the Bayesian account seems defeated by the finding that moving objects appear faster in the dark, because most motion thresholds are worse at low luminance. Here we show this is not the case for speed discrimination. Our results show that performance improves at low light levels by virtue of a perceived contrast cue that is more salient in the dark. With this cue removed, discrimination becomes independent of luminance. However, we found perceived speed still increased in the dark for the same observers, and by the same amount. A possible interpretation is that motion processing is therefore not Bayesian, because our findings challenge a key assumption these models make, namely that the accuracy of early sensory measurements is independent of basic stimulus properties like luminance. However, a final experiment restored Bayesian behaviour by adding external noise, making discrimination worse and slowing perceived speed down. Our findings therefore suggest that motion is processed in a Bayesian fashion but based on noisy sensory measurements that also vary in accuracy.
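The precision-weighting at the heart of the Bayesian account can be written down directly for a Gaussian likelihood and a Gaussian slow-motion prior centred on zero. The sketch below uses illustrative numbers, not the paper's data, to show why adding external noise should slow perceived speed:

```python
def posterior_speed(v_measured, sigma_like, sigma_prior, prior_mean=0.0):
    """MAP speed estimate for a Gaussian likelihood N(v_measured, sigma_like^2)
    combined with a slow-motion prior N(prior_mean, sigma_prior^2).
    The noisier the measurement, the more the prior drags the
    estimate towards slow speeds."""
    w_like = 1.0 / sigma_like ** 2    # precision of the sensory evidence
    w_prior = 1.0 / sigma_prior ** 2  # precision of the slow prior
    return (w_like * v_measured + w_prior * prior_mean) / (w_like + w_prior)

v = 10.0  # deg/s, illustrative stimulus speed
clean = posterior_speed(v, sigma_like=1.0, sigma_prior=5.0)  # low sensory noise
noisy = posterior_speed(v, sigma_like=3.0, sigma_prior=5.0)  # added external noise

# clean > noisy: degrading the measurement slows the estimate, which is
# the Bayesian behaviour the final experiment restored with external noise.
```

The puzzle the paper addresses is that low luminance speeds percepts up while this model predicts slowing; their resolution is that discrimination precision, once the contrast cue is removed, does not actually worsen in the dark.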
Affiliation(s)
- Tom C A Freeman
- School of Psychology, Cardiff University, Tower Building, 70 Park Place, Cardiff CF10 3AT, United Kingdom
- Georgie Powell
- School of Psychology, Cardiff University, Tower Building, 70 Park Place, Cardiff CF10 3AT, United Kingdom
7
Yildiz GY, Evans BG, Chouinard PA. The Effects of Adding Pictorial Depth Cues to the Poggendorff Illusion. Vision (Basel) 2022; 6:44. [PMID: 35893761 PMCID: PMC9326572 DOI: 10.3390/vision6030044]
Abstract
We tested whether the misapplication of perceptual constancy mechanisms might explain the perceived misalignment of the oblique lines in the Poggendorff illusion: specifically, whether these mechanisms might treat the rectangle in the middle portion of the Poggendorff stimulus as an occluder in front of one long line appearing on either side, causing an apparent decrease in the rectangle's width and an apparent increase in the misalignment of the oblique lines. The study examined these possibilities by testing the effects of adding pictorial depth cues. In experiments 1 and 2, we presented a central rectangle composed of either large or small bricks to determine if this manipulation would change the perceived alignment of the oblique lines and the perceived width of the central rectangle, respectively. The experiments demonstrated no changes that would support a misapplication of perceptual constancy in driving the illusion, despite some evidence of perceptual size rescaling of the central rectangle. In experiment 3, we presented Poggendorff stimuli in front and at the back of a corridor background rich in texture and linear perspective depth cues to determine if adding these cues would affect the Poggendorff illusion. The central rectangle was physically large and small when presented in front and at the back of the corridor, respectively. The strength of the Poggendorff illusion varied as a function of the physical size of the central rectangle, and, contrary to our predictions, the addition of pictorial depth cues in both the central rectangle and the background decreased rather than increased the strength of the illusion. The implications of these results with regard to different theories are discussed. It could be the case that the illusion depends on both low-level and cognitive mechanisms, and that deleterious effects occur on the former when the latter ascribes more certainty to the oblique lines being the same line receding into the distance.
Affiliation(s)
- Gizem Y. Yildiz
- Department of Psychology, Counselling, & Therapy, La Trobe University, Melbourne 3086, Australia
- Institute of Neuroscience and Medicine, INM-3, Research Center Jülich, 52425 Jülich, Germany
- Bailey G. Evans
- Department of Psychology, Counselling, & Therapy, La Trobe University, Melbourne 3086, Australia
- Philippe A. Chouinard
- Department of Psychology, Counselling, & Therapy, La Trobe University, Melbourne 3086, Australia
8
Laeng B, Nabil S, Kitaoka A. The Eye Pupil Adjusts to Illusorily Expanding Holes. Front Hum Neurosci 2022; 16:877249. [PMID: 35706480 PMCID: PMC9190027 DOI: 10.3389/fnhum.2022.877249]
Abstract
Some static patterns evoke the perception of an illusory expanding central region or “hole.” We asked observers to rate the magnitudes of illusory motion or expansion of black holes, and these predicted the degree of dilation of the pupil, measured with an eye tracker. In contrast, when the “holes” were colored (including white), i.e., emitted light, these patterns constricted the pupils, but the subjective expansions were also weaker compared with the black holes. The change rates of pupil diameters were significantly related to the illusory motion phenomenology only with the black holes. These findings can be accounted for within a perceiving-the-present account of visual illusions, where both the illusory motion and the pupillary adjustments represent compensatory mechanisms to the perception of the next moment, based on shared experiences with the ecological regularities of light.
Affiliation(s)
- Bruno Laeng
- Department of Psychology, University of Oslo, Oslo, Norway
- Correspondence: Bruno Laeng
- Shoaib Nabil
- Department of Psychology, University of Oslo, Oslo, Norway
9
Etemadi L, Enander JMD, Jörntell H. Remote cortical perturbation dynamically changes the network solutions to given tactile inputs in neocortical neurons. iScience 2022; 25:103557. [PMID: 34977509 PMCID: PMC8689199 DOI: 10.1016/j.isci.2021.103557]
Abstract
The neocortex has a globally encompassing network structure, which for each given input constrains the possible combinations of neuronal activations across it. Hence, its network contains solutions. But in addition, the cortex has an ever-changing multidimensional internal state, causing each given input to result in a wide range of specific neuronal activations. Here we use intracellular recordings in somatosensory cortex (SI) neurons of anesthetized rats to show that remote, subthreshold intracortical electrical perturbation can impact such constraints on the responses to a set of spatiotemporal tactile input patterns. Whereas each given input pattern normally induces a wide set of preferred response states, when combined with cortical perturbation, response states that did not otherwise occur were induced, consequently making other response states less likely. The findings indicate that the physiological network structure can dynamically change as the state of any given cortical region changes, thereby enabling a rich, multifactorial perceptual capability.
Highlights: tactile sensory input patterns evoke multi-structure cortical neuron responses; multi-structure responses are impacted by remote cortical regions; highly dynamic neuron responses reflect global cortical information integration; perception hence depends on globally distributed activity at the time of input.
Affiliation(s)
- Leila Etemadi
- Neural Basis of Sensorimotor Control, Department of Experimental Medical Science, Lund University, BMC F10 Tornavägen 10, 221 84 Lund, Sweden
- Jonas M D Enander
- Neural Basis of Sensorimotor Control, Department of Experimental Medical Science, Lund University, BMC F10 Tornavägen 10, 221 84 Lund, Sweden
- Henrik Jörntell
- Neural Basis of Sensorimotor Control, Department of Experimental Medical Science, Lund University, BMC F10 Tornavägen 10, 221 84 Lund, Sweden
10
Abstract
During natural vision, our brains are constantly exposed to complex, but regularly structured environments. Real-world scenes are defined by typical part-whole relationships, where the meaning of the whole scene emerges from configurations of localized information present in individual parts of the scene. Such typical part-whole relationships suggest that information from individual scene parts is not processed independently, but that there are mutual influences between the parts and the whole during scene analysis. Here, we review recent research that used a straightforward, but effective approach to study such mutual influences: By dissecting scenes into multiple arbitrary pieces, these studies provide new insights into how the processing of whole scenes is shaped by their constituent parts and, conversely, how the processing of individual parts is determined by their role within the whole scene. We highlight three facets of this research: First, we discuss studies demonstrating that the spatial configuration of multiple scene parts has a profound impact on the neural processing of the whole scene. Second, we review work showing that cortical responses to individual scene parts are shaped by the context in which these parts typically appear within the environment. Third, we discuss studies demonstrating that missing scene parts are interpolated from the surrounding scene context. Bridging these findings, we argue that efficient scene processing relies on an active use of the scene's part-whole structure, where the visual brain matches scene inputs with internal models of what the world should look like.
Affiliation(s)
- Daniel Kaiser
- Justus-Liebig-Universität Gießen, Germany; Philipps-Universität Marburg, Germany; University of York, United Kingdom
- Radoslaw M Cichy
- Freie Universität Berlin, Germany; Humboldt-Universität zu Berlin, Germany; Bernstein Centre for Computational Neuroscience Berlin, Germany
11
Candy TR, Cormack LK. Recent understanding of binocular vision in the natural environment with clinical implications. Prog Retin Eye Res 2021; 88:101014. [PMID: 34624515 PMCID: PMC8983798 DOI: 10.1016/j.preteyeres.2021.101014]
Abstract
Technological advances in recent decades have allowed us to measure both the information available to the visual system in the natural environment and the rich array of behaviors that the visual system supports. This review highlights the tasks undertaken by the binocular visual system in particular and how, for much of human activity, these tasks differ from those considered when an observer fixates a static target on the midline. The everyday motor and perceptual challenges involved in generating a stable, useful binocular percept of the environment are discussed, together with how these challenges are but minimally addressed by much of current clinical interpretation of binocular function. The implications for new technology, such as virtual reality, are also highlighted in terms of clinical and basic research application.
Affiliation(s)
- T Rowan Candy
- School of Optometry, Programs in Vision Science, Neuroscience and Cognitive Science, Indiana University, 800 East Atwater Avenue, Bloomington, IN 47405, USA
- Lawrence K Cormack
- Department of Psychology, Institute for Neuroscience, and Center for Perceptual Systems, The University of Texas at Austin, Austin, TX 78712, USA
12
Kaiser D, Inciuraite G, Cichy RM. Rapid contextualization of fragmented scene information in the human visual system. Neuroimage 2020; 219:117045. [PMID: 32540354 DOI: 10.1016/j.neuroimage.2020.117045]
Abstract
Real-world environments are extremely rich in visual information. At any given moment in time, only a fraction of this information is available to the eyes and the brain, rendering naturalistic vision a collection of incomplete snapshots. Previous research suggests that in order to successfully contextualize this fragmented information, the visual system sorts inputs according to spatial schemata, that is, knowledge about the typical composition of the visual world. Here, we used a large set of 840 different natural scene fragments to investigate whether this sorting mechanism can operate across the diverse visual environments encountered during real-world vision. We recorded brain activity using electroencephalography (EEG) while participants viewed incomplete scene fragments at fixation. Using representational similarity analysis on the EEG data, we tracked the fragments' cortical representations across time. We found that the fragments' typical vertical location within the environment (top or bottom) predicted their cortical representations, indexing a sorting of information according to spatial schemata. The fragments' cortical representations were most strongly organized by their vertical location at around 200 ms after image onset, suggesting rapid perceptual sorting of information according to spatial schemata. In control analyses, we show that this sorting is flexible with respect to visual features: it is neither explained by commonalities between visually similar indoor and outdoor scenes, nor by the feature organization emerging from a deep neural network trained on scene categorization. Demonstrating such flexible sorting across a wide range of visually diverse scenes suggests a contextualization mechanism suitable for complex and variable real-world environments.
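The representational-similarity logic used here can be sketched with synthetic data: pairwise pattern dissimilarities form a neural RDM, which is then correlated with a model RDM coding the fragments' vertical location. Everything below is simulated for illustration; it is not the paper's EEG data:

```python
import itertools
import math
import random

def correlation(a, b):
    """Pearson correlation between two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def rdm(patterns):
    """Representational dissimilarity: 1 - Pearson r for each pattern pair."""
    return [1.0 - correlation(patterns[i], patterns[j])
            for i, j in itertools.combinations(range(len(patterns)), 2)]

# Synthetic "EEG patterns" for 6 scene fragments (3 from the top, 3 from
# the bottom of scenes): fragments sharing a vertical location are made
# to share signal, which is what the schema account predicts.
random.seed(1)
top_template = [random.gauss(0, 1) for _ in range(32)]
bottom_template = [random.gauss(0, 1) for _ in range(32)]
labels = ["top"] * 3 + ["bottom"] * 3
patterns = [[t + random.gauss(0, 0.5) for t in
             (top_template if lab == "top" else bottom_template)]
            for lab in labels]

neural_rdm = rdm(patterns)
model_rdm = [0.0 if labels[i] == labels[j] else 1.0
             for i, j in itertools.combinations(range(6), 2)]
fit = correlation(neural_rdm, model_rdm)  # positive: location predicts the RDM
```

In the actual study this model fit is computed from RDMs at each EEG time point, which is how the ~200 ms peak of schema-based organization is located.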
Affiliation(s)
- Daniel Kaiser
- Department of Psychology, University of York, York, UK
- Gabriele Inciuraite
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Radoslaw M Cichy
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
13
Abstract
Humans and animals navigate uncertain environments by seeking information about the future. Remarkably, we often seek information even when it has no instrumental value for aiding our decisions - as if the information is a source of value in its own right. In recent years, there has been a flourishing of research into these non-instrumental information preferences and their implementation in the brain. Individuals value information about uncertain future rewards, and do so for multiple reasons, including valuing resolution of uncertainty and overweighting desirable information. The brain motivates this information seeking by tapping into some of the same circuitry as primary rewards like food and water. However, it also employs cortex and basal ganglia circuitry that predicts and values information as distinct from primary reward. Uncovering how these circuits cooperate will be fundamental to understanding information seeking and motivated behavior as a whole, in our increasingly complex and information-rich world.
Affiliation(s)
- Ilya E Monosov
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO, USA; Department of Biomedical Engineering, Washington University, St. Louis, MO, USA; Department of Neurosurgery, Washington University, St. Louis, MO, USA; Pain Center, Washington University, St. Louis, MO, USA
14
Abstract
Arguably the most foundational principle in perception research is that our experience of the world goes beyond the retinal image; we perceive the distal environment itself, not the proximal stimulation it causes. Shape may be the paradigm case of such "unconscious inference": When a coin is rotated in depth, we infer the circular object it truly is, discarding the perspectival ellipse projected on our eyes. But is this really the fate of such perspectival shapes? Or does a tilted coin retain an elliptical appearance even when we know it's circular? This question has generated heated debate from Locke and Hume to the present; but whereas extant arguments rely primarily on introspection, this problem is also open to empirical test. If tilted coins bear a representational similarity to elliptical objects, then a circular coin should, when rotated, impair search for a distal ellipse. Here, nine experiments demonstrate that this is so, suggesting that perspectival shapes persist in the mind far longer than traditionally assumed. Subjects saw search arrays of three-dimensional "coins," and simply had to locate a distally elliptical coin. Surprisingly, rotated circular coins slowed search for elliptical targets, even when subjects clearly knew the rotated coins were circular. This pattern arose with static and dynamic cues, couldn't be explained by strategic responding or unfamiliarity, generalized across shape classes, and occurred even with sustained viewing. Finally, these effects extended beyond artificial displays to real-world objects viewed in naturalistic, full-cue conditions. We conclude that objects have a remarkably persistent dual character: their objective shape "out there," and their perspectival shape "from here."
16
Kaliuzhna M, Stein T, Rusch T, Sekutowicz M, Sterzer P, Seymour KJ. No evidence for abnormal priors in early vision in schizophrenia. Schizophr Res 2019; 210:245-254. [PMID: 30587425 DOI: 10.1016/j.schres.2018.12.027]
Abstract
The predictive coding account of psychosis postulates the abnormal formation of prior beliefs in schizophrenia, resulting in psychotic symptoms. One domain in which priors play a crucial role is visual perception. For instance, our perception of brightness, line length, and motion direction is not merely based on a veridical extraction of sensory input but is also determined by an expectation (or prior) about the stimulus. Formation of such priors is thought to be governed by the statistical regularities within natural scenes. Recently, the use of such priors has been invoked to explain a specific set of well-documented visual illusions, supporting the idea that perception is biased toward what is statistically more probable within the environment. The predictive coding account of psychosis proposes that patients form abnormal representations of statistical regularities in natural scenes, leading to altered perceptual experiences. Here we use classical vision experiments involving this set of visual illusions to directly test this hypothesis. We find that perceptual judgments for both patients and control participants are biased in accordance with reported probability distributions of natural scenes. Thus, despite the suggested link between visual abnormalities and psychotic symptoms in schizophrenia, our results provide no support for the notion that altered formation of priors is a general feature of the disorder. These data call for a refinement of the predictions of quantitative models of psychosis.
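The precision-weighted combination of prior and sensory evidence that this account assumes can be illustrated with a minimal sketch. This is a generic Gaussian-prior toy, not the study's model; the "slow-motion" prior and all numbers are illustrative assumptions. With Gaussian prior and likelihood, the posterior mean is a precision-weighted average of the two means, so noisier input is pulled more strongly toward the prior:

```python
def posterior_mean(prior_mean, prior_sd, obs_mean, obs_sd):
    """Precision-weighted combination of a Gaussian prior and likelihood."""
    wp = 1 / prior_sd**2  # precision of the prior
    wl = 1 / obs_sd**2    # precision of the sensory likelihood
    return (wp * prior_mean + wl * obs_mean) / (wp + wl)

# Assumed "slow-motion" prior centred on 0 deg/s; stimulus moves at 10 deg/s.
clear = posterior_mean(0.0, 4.0, 10.0, 1.0)  # reliable sensory input
noisy = posterior_mean(0.0, 4.0, 10.0, 4.0)  # unreliable sensory input

# The noisy estimate is biased further toward the prior than the clear one.
print(round(clear, 2), round(noisy, 2))
```

Testing whether this prior-driven bias is abnormally weak or strong in patients is, in essence, what the illusion paradigms above probe.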
Affiliation(s)
- Mariia Kaliuzhna
- ARC Centre of Excellence in Cognition and its Disorders, Department of Cognitive Science, Macquarie University, Sydney, NSW, Australia; Clinical and Experimental Psychopathology Group, Department of Psychiatry, University of Geneva, Switzerland
- Timo Stein
- Department of Psychiatry and Psychotherapy, Campus Charité Mitte, Charité Universitätsmedizin Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Germany; Department of Psychology, University of Amsterdam, the Netherlands
- Tessa Rusch
- Institute for Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Martinistrasse 52, 20246 Hamburg, Germany
- Maria Sekutowicz
- Department of Psychiatry and Psychotherapy, Campus Charité Mitte, Charité Universitätsmedizin Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Germany
- Philipp Sterzer
- Department of Psychiatry and Psychotherapy, Campus Charité Mitte, Charité Universitätsmedizin Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Germany
- Kiley J Seymour
- ARC Centre of Excellence in Cognition and its Disorders, Department of Cognitive Science, Macquarie University, Sydney, NSW, Australia; Department of Psychiatry and Psychotherapy, Campus Charité Mitte, Charité Universitätsmedizin Berlin, Germany; School of Social Sciences and Psychology, Western Sydney University, New South Wales, Australia.
17
Kaiser D, Quek GL, Cichy RM, Peelen MV. Object Vision in a Structured World. Trends Cogn Sci 2019; 23:672-685. [PMID: 31147151 PMCID: PMC7612023 DOI: 10.1016/j.tics.2019.04.013] [Citation(s) in RCA: 73] [Impact Index Per Article: 14.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2019] [Revised: 04/15/2019] [Accepted: 04/30/2019] [Indexed: 01/02/2023]
Abstract
In natural vision, objects appear at typical locations, both with respect to visual space (e.g., an airplane in the upper part of a scene) and other objects (e.g., a lamp above a table). Recent studies have shown that object vision is strongly adapted to such positional regularities. In this review we synthesize these developments, highlighting that adaptations to positional regularities facilitate object detection and recognition, and sharpen the representations of objects in visual cortex. These effects are pervasive across various types of high-level content. We posit that adaptations to real-world structure collectively support optimal usage of limited cortical processing resources. Taking positional regularities into account will thus be essential for understanding efficient object vision in the real world.
Affiliation(s)
- Daniel Kaiser
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany.
- Genevieve L Quek
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Radoslaw M Cichy
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands.
18
Gardner JL. Optimality and heuristics in perceptual neuroscience. Nat Neurosci 2019; 22:514-523. [PMID: 30804531 DOI: 10.1038/s41593-019-0340-4] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2018] [Accepted: 01/16/2019] [Indexed: 11/09/2022]
Abstract
The foundation for modern understanding of how we make perceptual decisions about what we see or where to look comes from considering the optimal way to perform these behaviors. While statistical computation is useful for deriving the optimal solution to a perceptual problem, optimality requires perfect knowledge of priors and often complex computation. Accumulating evidence, however, suggests that optimal perceptual goals can be achieved or approximated more simply by human observers using heuristic approaches. Perceptual neuroscientists captivated by optimal explanations of sensory behaviors will fail in their search for the neural circuits and cortical processes that implement an optimal computation whenever that behavior is actually achieved through heuristics. This article provides a cross-disciplinary review of decision-making with the aim of building perceptual theory that uses optimality to set the computational goals for perceptual behavior but, through consideration of ecological, computational, and energetic constraints, incorporates how these optimal goals can be achieved through heuristic approximation.
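The contrast between an optimal computation and a heuristic approximation can be made concrete with a small simulation. This is a generic cue-combination toy under assumed noise levels, not an analysis from the article: reliability-weighted averaging is statistically optimal, yet plain averaging comes close whenever cue reliabilities are similar.

```python
import random

# Two noisy cues estimate the same quantity. The "optimal" rule weights each
# cue by its reliability (inverse variance); the heuristic simply averages.

def optimal(c1, c2, sd1, sd2):
    w1, w2 = 1 / sd1**2, 1 / sd2**2
    return (w1 * c1 + w2 * c2) / (w1 + w2)

def heuristic(c1, c2):
    return (c1 + c2) / 2  # ignores reliability entirely

random.seed(0)
true_val, sd1, sd2 = 5.0, 1.0, 1.2  # similar (assumed) cue reliabilities
n = 10_000
err_opt = err_heur = 0.0
for _ in range(n):
    c1 = random.gauss(true_val, sd1)
    c2 = random.gauss(true_val, sd2)
    err_opt += (optimal(c1, c2, sd1, sd2) - true_val) ** 2
    err_heur += (heuristic(c1, c2) - true_val) ** 2

# Mean squared error of the heuristic lands only marginally above optimal.
print(err_opt / n, err_heur / n)
```

A neural circuit implementing the simple average would approximate the optimal goal at far lower cost, which is the kind of ambiguity for the circuit-hunting neuroscientist that the article highlights.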
Affiliation(s)
- Justin L Gardner
- Department of Psychology, Stanford University, Stanford, California, USA.
19
de Winter J. Book Review: Modeling Human–System Interaction: Philosophical and Methodological Considerations, With Examples. ERGONOMICS IN DESIGN 2018. [DOI: 10.1177/1064804618795704] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
20
Kaiser D, Cichy RM. Typical visual-field locations enhance processing in object-selective channels of human occipital cortex. J Neurophysiol 2018; 120:848-853. [DOI: 10.1152/jn.00229.2018] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Natural environments consist of multiple objects, many of which repeatedly occupy similar locations within a scene. For example, hats are seen on people’s heads, while shoes are most often seen close to the ground. Such positional regularities bias the distribution of objects across the visual field: hats are more often encountered in the upper visual field, while shoes are more often encountered in the lower visual field. Here we tested the hypothesis that typical visual field locations of objects facilitate cortical processing. We recorded functional MRI while participants viewed images of objects that were associated with upper or lower visual field locations. Using multivariate classification, we show that object information can be more successfully decoded from response patterns in object-selective lateral occipital cortex (LO) when the objects are presented in their typical location (e.g., shoe in the lower visual field) than when they are presented in an atypical location (e.g., shoe in the upper visual field). In a functional connectivity analysis, we relate this benefit to increased coupling between LO and early visual cortex, suggesting that typical object positioning facilitates information propagation across the visual hierarchy. Together these results suggest that object representations in occipital visual cortex are tuned to the structure of natural environments. This tuning may support object perception in spatially structured environments. NEW & NOTEWORTHY In the real world, objects appear in predictable spatial locations. Hats, commonly appearing on people’s heads, often fall into the upper visual field. Shoes, mostly appearing on people’s feet, often fall into the lower visual field. Here we used functional MRI to demonstrate that such regularities facilitate cortical processing: Objects encountered in their typical locations are coded more efficiently, which may allow us to effortlessly recognize objects in natural environments.
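The multivariate classification logic used in such decoding studies can be sketched on simulated data. These are synthetic "voxel" patterns with the typical-location advantage built in by assumption, so the sketch illustrates the analysis pipeline, not the fMRI result itself:

```python
import random

# Nearest-centroid decoding of two object classes from simulated response
# patterns. "Typical" trials are generated with a larger pattern separation
# than "atypical" trials (an assumption mirroring the reported effect).

random.seed(1)
N_VOX, N_TRIALS, NOISE = 20, 100, 1.0

def make_trials(separation):
    """Simulate labelled trials for two object classes."""
    trials = []
    for _ in range(N_TRIALS):
        trials.append(('A', [random.gauss(0.0, NOISE) for _ in range(N_VOX)]))
        trials.append(('B', [random.gauss(separation, NOISE) for _ in range(N_VOX)]))
    random.shuffle(trials)
    return trials

def centroid(patterns):
    return [sum(v) / len(v) for v in zip(*patterns)]

def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def decode(trials):
    """Train nearest-centroid on the first half, test on the second half."""
    half = len(trials) // 2
    train, test = trials[:half], trials[half:]
    cents = {lab: centroid([p for l, p in train if l == lab]) for lab in 'AB'}
    hits = sum(1 for lab, p in test
               if min('AB', key=lambda c: dist(p, cents[c])) == lab)
    return hits / len(test)

acc_typical = decode(make_trials(separation=0.5))   # stronger pattern difference
acc_atypical = decode(make_trials(separation=0.2))  # weaker pattern difference
print(acc_typical, acc_atypical)
```

Higher cross-validated accuracy for the more separable patterns is the signature the study reports for objects shown in their typical visual-field locations.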
Affiliation(s)
- Daniel Kaiser
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Radoslaw M. Cichy
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Berlin School of Mind and Brain, Humboldt-Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
21
Typical visual-field locations facilitate access to awareness for everyday objects. Cognition 2018; 180:118-122. [PMID: 30029067 DOI: 10.1016/j.cognition.2018.07.009] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2018] [Revised: 07/10/2018] [Accepted: 07/12/2018] [Indexed: 11/20/2022]
Abstract
In real-world vision, humans are constantly confronted with complex environments that contain a multitude of objects. These environments are spatially structured, so that objects have different likelihoods of appearing in specific parts of the visual space. Our massive experience with such positional regularities prompts the hypothesis that the processing of individual objects varies in efficiency across the visual field: when objects are encountered in their typical locations (e.g., we are used to seeing lamps in the upper visual field and carpets in the lower visual field), they should be more efficiently perceived than when they are encountered in atypical locations (e.g., a lamp in the lower visual field and a carpet in the upper visual field). Here, we provide evidence for this hypothesis by showing that typical positioning facilitates an object's access to awareness. In two continuous flash suppression experiments, objects more efficiently overcame inter-ocular suppression when they were presented in visual-field locations that matched their typical locations in the environment, as compared to non-typical locations. This finding suggests that through extensive experience the visual system has adapted to the statistics of the environment. This adaptation may be particularly useful for rapid object individuation in natural scenes.
22
Favela LH, Riley MA, Shockley K, Chemero A. Perceptually Equivalent Judgments Made Visually and via Haptic Sensory-Substitution Devices. ECOLOGICAL PSYCHOLOGY 2018. [DOI: 10.1080/10407413.2018.1473712] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/16/2022]
Affiliation(s)
- Luis H. Favela
- Department of Philosophy and Cognitive Sciences Program, University of Central Florida
- Anthony Chemero
- Department of Philosophy and Department of Psychology, University of Cincinnati
23
Abstract
Color is special among basic visual features in that it can form a defining part of objects that are engrained in our memory. Whereas most neuroimaging research on human color vision has focused on responses related to external stimulation, the present study investigated how sensory-driven color vision is linked to subjective color perception induced by object imagery. We recorded fMRI activity in male and female volunteers during viewing of abstract color stimuli that were red, green, or yellow in half of the runs. In the other half we asked them to produce mental images of colored, meaningful objects (such as tomato, grapes, banana) corresponding to the same three color categories. Although physically presented color could be decoded from all retinotopically mapped visual areas, only hV4 allowed predicting colors of imagined objects when classifiers were trained on responses to physical colors. Importantly, only neural signal in hV4 was predictive of behavioral performance in the color judgment task on a trial-by-trial basis. The commonality between neural representations of sensory-driven and imagined object color and the behavioral link to neural representations in hV4 identifies area hV4 as a perceptual hub linking externally triggered color vision with color in self-generated object imagery.

SIGNIFICANCE STATEMENT: Humans experience color not only when visually exploring the outside world, but also in the absence of visual input, for example when remembering, dreaming, and during imagery. It is not known where neural codes for sensory-driven and internally generated hue converge. In the current study we evoked matching subjective color percepts, one driven by physically presented color stimuli, the other by internally generated color imagery. This allowed us to identify area hV4 as the only site where neural codes of corresponding subjective color perception converged regardless of its origin. Color codes in hV4 also predicted behavioral performance in an imagery task, suggesting it forms a perceptual hub for color perception.
24
Anzulewicz A, Wierzchoń M. Shades of Awareness on the Mechanisms Underlying the Quality of Conscious Representations: A Commentary to Fazekas and Overgaard (). Cogn Sci 2018; 42:2095-2100. [PMID: 29349802 DOI: 10.1111/cogs.12578] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2017] [Revised: 11/14/2017] [Accepted: 11/15/2017] [Indexed: 11/29/2022]
Abstract
Fazekas and Overgaard () present a novel, multidimensional model that explains different ways in which conscious representations can be degraded. Moreover, the authors discuss possible mechanisms that underlie different kinds of degradation, primarily those related to attentional processing. In this letter, we argue that the proposed mechanisms are not sufficient. We propose that (1) attentional mechanisms work differently at various processing stages; and (2) factors that are independent of attentional ones, such as expectation, previous experience, and context, should be accounted for if we are aiming to construct a comprehensive model of conscious visual perception.
Affiliation(s)
- Anna Anzulewicz
- Consciousness Lab, Institute of Psychology, Jagiellonian University
- Michał Wierzchoń
- Consciousness Lab, Institute of Psychology, Jagiellonian University
25
Familiarity Detection is an Intrinsic Property of Cortical Microcircuits with Bidirectional Synaptic Plasticity. eNeuro 2017; 4:eN-NWR-0361-16. [PMID: 28534043 PMCID: PMC5439184 DOI: 10.1523/eneuro.0361-16.2017] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2016] [Revised: 04/26/2017] [Accepted: 04/27/2017] [Indexed: 11/21/2022] Open
Abstract
Humans instantly recognize a previously seen face as “familiar.” To deepen our understanding of familiarity-novelty detection, we simulated biologically plausible neural network models of generic cortical microcircuits consisting of spiking neurons with random recurrent synaptic connections. NMDA receptor (NMDAR)-dependent synaptic plasticity was implemented to allow for unsupervised learning and bidirectional modifications. Network spiking activity evoked by sensory inputs consisting of face images altered synaptic efficacy, which resulted in the network responding more strongly to a previously seen face than a novel face. Network size determined how many faces could be accurately recognized as familiar. When the simulated model became sufficiently complex in structure, multiple familiarity traces could be retained in the same network by forming partially-overlapping subnetworks that differ slightly from each other, thereby resulting in a high storage capacity. Fisher’s discriminant analysis was applied to identify critical neurons whose spiking activity predicted familiar input patterns. Intriguingly, as sensory exposure was prolonged, the selected critical neurons tended to appear at deeper layers of the network model, suggesting recruitment of additional circuits in the network for incremental information storage. We conclude that generic cortical microcircuits with bidirectional synaptic plasticity have an intrinsic ability to detect familiar inputs. This ability does not require a specialized wiring diagram or supervision and can therefore be expected to emerge naturally in developing cortical circuits.
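The core claim, that bidirectional Hebbian-style plasticity alone yields familiarity detection, can be illustrated with a much simpler toy than the authors' spiking model. All parameters here are arbitrary assumptions: a randomly connected linear network, one Hebbian exposure, and the evoked "drive" as a stand-in for spiking response strength.

```python
import random

# After a single Hebbian exposure to one binary pattern, the network's
# evoked drive is larger for that (now familiar) pattern than for a novel one.

random.seed(2)
N = 50     # number of units
ETA = 0.05 # learning rate

# Random recurrent weights and two random binary input patterns
W = [[random.gauss(0, 0.1) for _ in range(N)] for _ in range(N)]
familiar = [random.choice([0.0, 1.0]) for _ in range(N)]
novel = [random.choice([0.0, 1.0]) for _ in range(N)]

def response(w, x):
    """Total recurrent drive evoked by pattern x."""
    return sum(sum(w[i][j] * x[j] for j in range(N)) * x[i] for i in range(N))

before = response(W, familiar)

# One Hebbian exposure: strengthen weights between co-active units
for i in range(N):
    for j in range(N):
        W[i][j] += ETA * familiar[i] * familiar[j]

print(response(W, familiar) > response(W, novel))  # familiar evokes more drive
```

No supervision or special wiring is needed; the stronger response to the seen pattern falls out of the co-activity rule, which is the intuition behind the paper's "intrinsic property" claim.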
26
Pitti A, Pugach G, Gaussier P, Shimada S. Spatio-Temporal Tolerance of Visuo-Tactile Illusions in Artificial Skin by Recurrent Neural Network with Spike-Timing-Dependent Plasticity. Sci Rep 2017; 7:41056. [PMID: 28106139 PMCID: PMC5247701 DOI: 10.1038/srep41056] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2016] [Accepted: 12/16/2016] [Indexed: 12/15/2022] Open
Abstract
Perceptual illusions across multiple modalities, such as the rubber-hand illusion, show how dynamic the brain is at adapting its body image and at determining what is part of it (the self) and what is not (others). Several studies have shown that redundancy and contingency among sensory signals are essential for perception of the illusion and that a lag of 200-300 ms is the critical limit for the brain to represent one's own body. In an experimental setup with an artificial skin, we replicate the visuo-tactile illusion within artificial neural networks. Our model is composed of an associative map and a recurrent map of spiking neurons that learn to predict the contingent activity across the visuo-tactile signals. Depending on the temporal delay added between the visuo-tactile signals or the spatial distance between two distinct stimuli, the two maps detect contingency differently. Spiking neurons organized into complex networks, together with synchrony detection at different temporal intervals, can thus account for multisensory integration of the bodily self.
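At its simplest, the critical temporal limit reported in such studies amounts to a binding window; the sketch below is a hypothetical reduction of that idea (the 300 ms threshold is taken from the range quoted above, and the function names are illustrative):

```python
# Visuo-tactile events are judged contingent (and so attributable to one's
# own body) only when their delay falls within a critical binding window.

CRITICAL_WINDOW_MS = 300  # upper end of the 200-300 ms range cited above

def contingent(visual_t_ms, tactile_t_ms, window=CRITICAL_WINDOW_MS):
    """True when the two signals are close enough in time to be bound."""
    return abs(visual_t_ms - tactile_t_ms) <= window

print(contingent(1000, 1150))  # 150 ms lag: bound as self-related
print(contingent(1000, 1450))  # 450 ms lag: attributed to an external cause
```

The paper's contribution is to show how this window can emerge from spike-timing-dependent plasticity in recurrent maps rather than being hard-coded as it is here.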
Affiliation(s)
- Alexandre Pitti
- ETIS Laboratory, UMR CNRS 8051, University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France
- Ganna Pugach
- ETIS Laboratory, UMR CNRS 8051, University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France; Energy and Metallurgy Department, Donetsk National Technical University, Krasnoarmeysk, Ukraine
- Philippe Gaussier
- ETIS Laboratory, UMR CNRS 8051, University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France
- Sotaro Shimada
- Dept. of Electronics and Bioinformatics, School of Science and Technology, Meiji University, Kawasaki, Japan
27

28
Mandik P. The Myth of Color Sensations, or How Not to See a Yellow Banana. Top Cogn Sci 2016; 9:228-240. [PMID: 28000985 DOI: 10.1111/tops.12238] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2014] [Revised: 10/10/2016] [Accepted: 10/10/2016] [Indexed: 11/27/2022]
Abstract
I argue against a class of philosophical views of color perception, especially insofar as such views posit the existence of color sensations. I argue against the need to posit such nonconceptual mental intermediaries between the stimulus and the eventual conceptualized perceptual judgment. Central to my arguments are considerations of certain color illusions. Such illusions are best explained by reference to high-level, conceptualized knowledge concerning, for example, object identity, likely lighting conditions, and material composition of the distal stimulus. Such explanations obviate the need to appeal to nonconceptual mental links in the causal chains eventuating in conceptualized color discriminations.
Affiliation(s)
- Pete Mandik
- Department of Philosophy, William Paterson University
29
Abstract
Visual illusions occur when information from images is perceived differently from the actual physical properties of the stimulus in terms of brightness, size, colour and/or motion. Illusions are therefore important tools for sensory perception research and, from an ecological perspective, relevant for visually guided animals viewing signals in heterogeneous environments. Here, we tested whether fish perceived a lightness cube illusion in which identical coloured targets appear (for humans) to return different spectral outputs depending on the apparent amount of illumination they are perceived to be under. Triggerfish (Rhinecanthus aculeatus) were trained to peck at coloured targets to receive food rewards, and were shown to experience similar shifts in colour perception when targets were placed in illusory shadows. Fish therefore appear to experience simultaneous contrast mechanisms similar to those of humans, even when targets are embedded in complex, scene-type illusions. Studies such as these help unlock the fundamental principles of visual system mechanisms.
30
Shahmoradi A. Why do we need perceptual content? PHILOSOPHICAL PSYCHOLOGY 2016. [DOI: 10.1080/09515089.2016.1142071] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
31
Emberson LL, Rubinstein DY. Statistical learning is constrained to less abstract patterns in complex sensory input (but not the least). Cognition 2016; 153:63-78. [PMID: 27139779 PMCID: PMC4905776 DOI: 10.1016/j.cognition.2016.04.010] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2014] [Revised: 04/12/2016] [Accepted: 04/15/2016] [Indexed: 11/06/2022]
Abstract
The influence of statistical information on behavior (either through learning or adaptation) is quickly becoming foundational to many domains of cognitive psychology and cognitive neuroscience, from language comprehension to visual development. We investigate a central problem impacting these diverse fields: when encountering input with rich statistical information, are there any constraints on learning? This paper examines learning outcomes when adult learners are given statistical information across multiple levels of abstraction simultaneously: from abstract, semantic categories of everyday objects to individual viewpoints on these objects. After revealing statistical learning of abstract, semantic categories with scrambled individual exemplars (Exp. 1), participants viewed pictures where the categories as well as the individual objects predicted picture order (e.g., bird1—dog1, bird2—dog2). Our findings suggest that participants preferentially encode the relationships between the individual objects, even in the presence of statistical regularities linking semantic categories (Exps. 2 and 3). In a final experiment we investigate whether learners are biased towards learning object-level regularities or simply construct the most detailed model given the data (and therefore best able to predict the specifics of the upcoming stimulus) by investigating whether participants preferentially learn from the statistical regularities linking individual snapshots of objects or the relationship between the objects themselves (e.g., bird_picture1—dog_picture1, bird_picture2—dog_picture2). We find that participants fail to learn the relationships between individual snapshots, suggesting a bias towards object-level statistical regularities as opposed to merely constructing the most complete model of the input. This work moves beyond the previous existence proofs that statistical learning is possible at both very high and very low levels of abstraction (categories vs. individual objects) and suggests that, at least with the current categories and type of learner, there are biases to pick up on statistical regularities between individual objects even when robust statistical information is present at other levels of abstraction. These findings speak directly to emerging theories about how systems supporting statistical learning and prediction operate in our structure-rich environments. Moreover, the theoretical implications of the current work across multiple domains of study are already clear: statistical learning cannot be assumed to be unconstrained even if statistical learning has previously been established at a given level of abstraction when that information is presented in isolation.
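The two levels of regularity at stake can be made explicit by computing transitional probabilities over a toy stream (hypothetical token names patterned on the abstract's bird1/dog1 example; this sketches the learning problem, not the experiments themselves):

```python
from collections import Counter

# A stream in which both the category (bird -> dog) and the specific
# exemplar (bird1 -> dog1) perfectly predict the next item. Transitional
# probabilities can be estimated at either level of abstraction.

stream = ['bird1', 'dog1', 'bird2', 'dog2', 'bird1', 'dog1', 'bird2', 'dog2']

def transitions(seq):
    """P(next | current) estimated from bigram counts."""
    pairs = Counter(zip(seq, seq[1:]))
    totals = Counter(a for a, _ in pairs.elements())
    return {(a, b): n / totals[a] for (a, b), n in pairs.items()}

exemplar_tp = transitions(stream)
category_tp = transitions([tok.rstrip('0123456789') for tok in stream])

print(exemplar_tp[('bird1', 'dog1')])  # object-level regularity
print(category_tp[('bird', 'dog')])    # category-level regularity
```

Both levels carry perfect structure in such input; the paper's question is which of these equally available regularities learners actually pick up.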
Affiliation(s)
- Lauren L Emberson
- Brain and Cognitive Sciences Department, University of Rochester, USA; Psychology Department, Princeton University, USA.
- Dani Y Rubinstein
- Psychology Department, Cornell University, USA; Department of Neuroscience, Brown University, USA; Section on Integrative Neuroimaging, Clinical and Translational Neuroscience Branch, National Institute of Mental Health, NIH, Bethesda, MD, USA
32
Macaluso E, Noppeney U, Talsma D, Vercillo T, Hartcher-O’Brien J, Adam R. The Curious Incident of Attention in Multisensory Integration: Bottom-up vs. Top-down. Multisens Res 2016. [DOI: 10.1163/22134808-00002528] [Citation(s) in RCA: 50] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
The role attention plays in our experience of a coherent, multisensory world is still controversial. On the one hand, a subset of inputs may be selected for detailed processing and multisensory integration in a top-down manner, i.e., guidance of multisensory integration by attention. On the other hand, stimuli may be integrated in a bottom-up fashion according to low-level properties such as spatial coincidence, thereby capturing attention. Moreover, attention itself is multifaceted and can be described via both top-down and bottom-up mechanisms. Thus, the interaction between attention and multisensory integration is complex and situation-dependent. The authors of this opinion paper are researchers who have contributed to this discussion from behavioural, computational and neurophysiological perspectives. We posed a series of questions, the goal of which was to illustrate the interplay between bottom-up and top-down processes in various multisensory scenarios, in order to clarify the standpoint taken by each author and with the hope of reaching a consensus. Although divergence of viewpoint emerges in the current responses, there is also considerable overlap: in general, it can be concluded that the amount of influence that attention exerts on multisensory integration depends on the current task as well as the prior knowledge and expectations of the observer. Moreover, stimulus properties such as reliability and salience also determine how open the processing is to influences of attention.
Affiliation(s)
- Uta Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, UK
- Durk Talsma
- Department of Experimental Psychology, Ghent University, Henri Dunantlaan 2, B-9000 Ghent, Belgium
- Ruth Adam
- Institute for Stroke and Dementia Research, Klinikum der Universität München, Ludwig-Maximilians-Universität LMU, Munich, Germany
33
Silverstein SM. Visual Perception Disturbances in Schizophrenia: A Unified Model. NEBRASKA SYMPOSIUM ON MOTIVATION. NEBRASKA SYMPOSIUM ON MOTIVATION 2016; 63:77-132. [PMID: 27627825 DOI: 10.1007/978-3-319-30596-7_4] [Citation(s) in RCA: 71] [Impact Index Per Article: 8.9] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
34
Purves D, Morgenstern Y, Wojtach WT. Perception and Reality: Why a Wholly Empirical Paradigm is Needed to Understand Vision. Front Syst Neurosci 2015; 9:156. [PMID: 26635546 PMCID: PMC4649043 DOI: 10.3389/fnsys.2015.00156] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2015] [Accepted: 10/29/2015] [Indexed: 11/13/2022] Open
Abstract
A central puzzle in vision science is how perceptions that are routinely at odds with physical measurements of real world properties can arise from neural responses that nonetheless lead to effective behaviors. Here we argue that the solution depends on: (1) rejecting the assumption that the goal of vision is to recover, however imperfectly, properties of the world; and (2) replacing it with a paradigm in which perceptions reflect biological utility based on past experience rather than objective features of the environment. Present evidence is consistent with the conclusion that conceiving vision in wholly empirical terms provides a plausible way to understand what we see and why.
Affiliation(s)
- Dale Purves
- Duke Institute for Brain Sciences, Duke University, Durham, NC, USA
- William T. Wojtach
- Duke Institute for Brain Sciences, Duke University, Durham, NC, USA
- Duke-NUS Graduate Medical School, Singapore
35

36
Contrast coding in the electrosensory system: parallels with visual computation. Nat Rev Neurosci 2015; 16:733-44. [PMID: 26558527 DOI: 10.1038/nrn4037] [Citation(s) in RCA: 51] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
To identify and interact with moving objects, including other members of the same species, an animal's nervous system must correctly interpret patterns of contrast in the physical signals (such as light or sound) that it receives from the environment. In weakly electric fish, the motion of objects in the environment and social interactions with other fish create complex patterns of contrast in the electric fields that they produce and detect. These contrast patterns can extend widely over space and time and represent a multitude of relevant features, as is also true for other sensory systems. Mounting evidence suggests that the computational principles underlying contrast coding in electrosensory neural networks are conserved elements of spatiotemporal processing that show strong parallels with the vertebrate visual system.
37
Lupyan G. Object knowledge changes visual appearance: semantic effects on color afterimages. Acta Psychol (Amst) 2015; 161:117-30. [PMID: 26386775 DOI: 10.1016/j.actpsy.2015.08.006] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9]
Abstract
According to predictive coding models of perception, what we see is determined jointly by the current input and the priors established by previous experience, expectations, and other contextual factors. The same input can thus be perceived differently depending on the priors that are brought to bear during viewing. Here, I show that expected (diagnostic) colors are perceived more vividly than arbitrary or unexpected colors, particularly when color input is unreliable. Participants were tested on a version of the 'Spanish Castle Illusion' in which viewing a hue-inverted image renders a subsequently shown achromatic version of the image in vivid color. Adapting to objects with intrinsic colors (e.g., a pumpkin) led to stronger afterimages than adapting to arbitrarily colored objects (e.g., a pumpkin-colored car). Considerably stronger afterimages were also produced by scenes containing intrinsically colored elements (grass, sky) compared to scenes with arbitrarily colored objects (books). The differences between images with diagnostic and arbitrary colors disappeared when the association between the image and color priors was weakened by, e.g., presenting the image upside-down, consistent with the prediction that color appearance is being modulated by color knowledge. Visual inputs that conflict with prior knowledge appear to be phenomenologically discounted, but this discounting is moderated by input certainty, as shown by the final study which uses conventional images rather than afterimages. As input certainty is increased, unexpected colors can become easier to detect than expected ones, a result consistent with predictive-coding models.
38
Cognitive Penetrability of Perception in the Age of Prediction: Predictive Systems are Penetrable Systems. Review of Philosophy and Psychology 2015. [DOI: 10.1007/s13164-015-0253-4] [Citation(s) in RCA: 55] [Impact Index Per Article: 6.1]
39
Sawayama M, Kimura E. Stain on texture: Perception of a dark spot having a blurred edge on textured backgrounds. Vision Res 2015; 109:209-20. [DOI: 10.1016/j.visres.2014.11.017] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7]
40
41
Morgenstern Y, Rukmini DV, Monson BB, Purves D. Properties of artificial neurons that report lightness based on accumulated experience with luminance. Front Comput Neurosci 2014; 8:134. [PMID: 25404912 PMCID: PMC4217489 DOI: 10.3389/fncom.2014.00134] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9]
Abstract
The responses of visual neurons in experimental animals have been extensively characterized. To ask whether these responses are consistent with a wholly empirical concept of visual perception, we optimized simple neural networks that responded according to the cumulative frequency of occurrence of local luminance patterns in retinal images. Based on this estimation of accumulated experience, the neuron responses showed classical center-surround receptive fields, luminance gain control and contrast gain control, the key properties of early level visual neurons determined in animal experiments. These results imply that a major purpose of pre-cortical neuronal circuitry is to contend with the inherently uncertain significance of luminance values in natural stimuli.
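The center-surround receptive fields this abstract refers to are conventionally summarized as a difference-of-Gaussians (DoG) kernel. The sketch below is illustrative only — the kernel size and sigmas are arbitrary choices, not parameters from the paper — but it shows why such a unit responds to local contrast rather than to absolute luminance:

```python
import numpy as np

def dog_kernel(size=21, sigma_c=1.5, sigma_s=4.0):
    """Center-surround kernel: a narrow excitatory center Gaussian
    minus a broad inhibitory surround Gaussian (both unit-mass)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return center - surround

k = dog_kernel()

# A uniform patch drives almost no response, because center and surround
# nearly cancel (the kernel sums to approximately zero), while a bright
# spot on a dark background drives a clear positive response:
uniform = np.ones((21, 21))
spot = np.zeros((21, 21))
spot[10, 10] = 1.0
resp_uniform = float((k * uniform).sum())
resp_spot = float((k * spot).sum())
```

Because the excitatory center and inhibitory surround nearly cancel over any uniform field, the unit's output is dominated by local deviations from the surround — one simple sense in which early circuitry can "contend with the inherently uncertain significance of luminance values."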
Affiliation(s)
- Yaniv Morgenstern
- Neuroscience and Behavioral Disorders Program, Duke-NUS Graduate Medical School, Singapore
- Dhara V Rukmini
- Neuroscience and Behavioral Disorders Program, Duke-NUS Graduate Medical School, Singapore
- Brian B Monson
- Neuroscience and Behavioral Disorders Program, Duke-NUS Graduate Medical School, Singapore
- Dale Purves
- Neuroscience and Behavioral Disorders Program, Duke-NUS Graduate Medical School, Singapore; Department of Neurobiology, Duke University Medical Center, Durham, NC, USA; Duke Institute for Brain Sciences, Duke University, Durham, NC, USA
42
Which way is down? Positional distortion in the tilt illusion. PLoS One 2014; 9:e110729. [PMID: 25343463 PMCID: PMC4208767 DOI: 10.1371/journal.pone.0110729] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4]
Abstract
Contextual information can have a huge impact on our sensory experience. The tilt illusion is a classic example of contextual influence exerted by an oriented surround on a target's perceived orientation. Traditionally, the tilt illusion has been described as the outcome of inhibition between cortical neurons with adjacent receptive fields and a similar preference for orientation. An alternative explanation is that tilted contexts could produce a re-calibration of the subjective frame of reference. Although the distinction is subtle, only the latter model makes clear predictions for unoriented stimuli. In the present study, we tested one such prediction by asking four naive subjects to estimate three positions (4, 6, and 8 o'clock) on an imaginary clock face within a tilted surround. To indicate their estimates, they used either an unoriented dot or a line segment, with one endpoint at fixation in the middle of the surround. The surround's tilt was randomly chosen from a set of orientations (± 75°, ± 65°, ± 55°, ± 45°, ± 35°, ± 25°, ± 15°, ± 5° with respect to vertical) across trials. Our results showed systematic biases consistent with the tilt illusion in both conditions. Biases were largest when observers attempted to estimate the 4 and 8 o'clock positions, but there was no significant difference between data gathered with the dot and data gathered with the line segment. A control experiment confirmed that biases were better accounted for by a local coordinate shift than by torsional eye movements induced by the tilted context. This finding supports the idea that tilted contexts distort perceived positions as well as perceived orientations, an effect that cannot be readily explained by lateral interactions between orientation-selective cells in V1.
43
Morgenstern Y, Rostami M, Purves D. Properties of artificial networks evolved to contend with natural spectra. Proc Natl Acad Sci U S A 2014; 111 Suppl 3:10868-72. [PMID: 25024184 PMCID: PMC4113924 DOI: 10.1073/pnas.1402669111] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9]
Abstract
Understanding why spectra that are physically the same appear different in different contexts (color contrast), whereas spectra that are physically different appear similar (color constancy) presents a major challenge in vision research. Here, we show that the responses of biologically inspired neural networks evolved on the basis of accumulated experience with spectral stimuli automatically generate contrast and constancy. The results imply that these phenomena are signatures of a strategy that biological vision uses to circumvent the inverse optics problem as it pertains to light spectra, and that double-opponent neurons in early-level vision evolve to serve this purpose. This strategy provides a way of understanding the peculiar relationship between the objective world and subjective color experience, as well as rationalizing the relevant visual circuitry without invoking feature detection or image representation.
Affiliation(s)
- Yaniv Morgenstern
- Neuroscience and Behavioral Disorders Program, Duke-National University of Singapore Graduate Medical School, Singapore 169857
- Mohammad Rostami
- Neuroscience and Behavioral Disorders Program, Duke-National University of Singapore Graduate Medical School, Singapore 169857
- Dale Purves
- Neuroscience and Behavioral Disorders Program, Duke-National University of Singapore Graduate Medical School, Singapore 169857; Duke Institute for Brain Sciences, Duke University, Durham, NC 27708
44
A neural code for looming and receding motion is distributed over a population of electrosensory ON and OFF contrast cells. J Neurosci 2014; 34:5583-94. [PMID: 24741048 DOI: 10.1523/jneurosci.4988-13.2014] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5]
Abstract
Object saliency is based on the relative local-to-background contrast in the physical signals that underlie perceptual experience. As such, contrast-detecting neurons (ON/OFF cells) are found in many sensory systems, responding respectively to increased or decreased intensity within their receptive field centers. This differential sensitivity suggests that ON and OFF cells initiate segregated streams of information for positive and negative sensory contrast. However, while recording in vivo from the ON and OFF cells of Apteronotus leptorhynchus, we report that the reversal of stimulus motion triggers paradoxical responses to electrosensory contrast. By considering the instantaneous firing rates of both ON and OFF cell populations, a bidirectionally symmetric representation of motion is achieved for both positive and negative contrast stimuli. Whereas the firing rates of the individual contrast-detecting neurons convey scalar information, such as object distance, it is their sequential activation over longer timescales that tracks changes in the direction of movement.
45
Proulx MJ. The perception of shape from shading in a new light. PeerJ 2014; 2:e363. [PMID: 24795853 PMCID: PMC4006223 DOI: 10.7717/peerj.363] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3]
Abstract
How do humans see three-dimensional shape based on two-dimensional shading? Much research has assumed that a ‘light from above’ bias solves the ambiguity of shape from shading. Counter to the ‘light from above’ bias, studies of Bayesian priors have found that such a bias can be swayed by other light cues. Despite the persuasive power of the Bayesian models, many new studies and books cite the original ‘light from above’ findings. Here I present a version of the Bayesian result that can be experienced. The perception of shape-from-shading was found here to be influenced by an external light source, even when the light was obstructed and did not directly illuminate a two-dimensional stimulus. The results imply that this effect is robust and not low-level in nature. The perception of shape from shading is not necessarily based on a hard-wired internal representation of lighting direction, but rather assesses the direction of lighting in the scene adaptively. Here, for the first time, is an experiential opportunity to see what the Bayesian models have supported all along.
Affiliation(s)
- Michael J Proulx
- Crossmodal Cognition Laboratory, Department of Psychology, University of Bath, UK
46
Purves D, Monson BB, Sundararajan J, Wojtach WT. How biological vision succeeds in the physical world. Proc Natl Acad Sci U S A 2014; 111:4750-5. [PMID: 24639506 PMCID: PMC3977276 DOI: 10.1073/pnas.1311309111] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.5]
Abstract
Biological visual systems cannot measure the properties that define the physical world. Nonetheless, visually guided behaviors of humans and other animals are routinely successful. The purpose of this article is to consider how this feat is accomplished. Most concepts of vision propose, explicitly or implicitly, that visual behavior depends on recovering the sources of stimulus features either directly or by a process of statistical inference. Here we argue that, given the inability of the visual system to access the properties of the world, these conceptual frameworks cannot account for the behavioral success of biological vision. The alternative we present is that the visual system links the frequency of occurrence of biologically determined stimuli to useful perceptual and behavioral responses without recovering real-world properties. The evidence for this interpretation of vision is that the frequency of occurrence of stimulus patterns predicts many basic aspects of what we actually see. This strategy provides a different way of conceiving the relationship between objective reality and subjective experience, and offers a way to understand the operating principles of visual circuitry without invoking feature detection, representation, or probabilistic inference.
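The frequency-of-occurrence strategy described in this abstract can be caricatured in a few lines: treat the perceived value of a stimulus as its percentile rank among the values that accompany similar contexts. The toy below uses invented, uniformly distributed "context" samples (nothing here comes from the authors' model or data); it only shows how the same physical luminance can rank high against one context and low against another, in the spirit of simultaneous lightness contrast:

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_lightness(target, context_samples):
    """Predicted lightness = cumulative frequency (0..1) of the target
    luminance among luminances encountered in similar contexts."""
    context_samples = np.asarray(context_samples)
    return float((context_samples <= target).mean())

# Hypothetical context ensembles: a mostly dark surround and a mostly
# bright surround. The identical target luminance of 0.5 ranks near the
# top of one distribution and near the bottom of the other.
dim_context = rng.uniform(0.0, 0.5, 10_000)
bright_context = rng.uniform(0.5, 1.0, 10_000)
target = 0.5
on_dim = empirical_lightness(target, dim_context)
on_bright = empirical_lightness(target, bright_context)
```

The rank is computed without ever estimating the surface reflectance or illuminant that produced the luminance, which is the sense in which the strategy sidesteps recovering real-world properties.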
Affiliation(s)
- Dale Purves
- Neuroscience and Behavioral Disorders Program, Duke-NUS Graduate Medical School, Republic of Singapore 169857
- Department of Neurobiology, Duke University Medical Center, Durham, NC 27710
- Duke Institute for Brain Sciences, Duke University, Durham, NC 27708
- Brian B. Monson
- Neuroscience and Behavioral Disorders Program, Duke-NUS Graduate Medical School, Republic of Singapore 169857
- Janani Sundararajan
- Neuroscience and Behavioral Disorders Program, Duke-NUS Graduate Medical School, Republic of Singapore 169857
47
Peelen MV, Kastner S. Attention in the real world: toward understanding its neural basis. Trends Cogn Sci 2014; 18:242-50. [PMID: 24630872 DOI: 10.1016/j.tics.2014.02.004] [Citation(s) in RCA: 123] [Impact Index Per Article: 12.3]
Abstract
The efficient selection of behaviorally relevant objects from cluttered environments supports our everyday goals. Attentional selection has typically been studied in search tasks involving artificial and simplified displays. Although these studies have revealed important basic principles of attention, they do not explain how the brain efficiently selects familiar objects in complex and meaningful real-world scenes. Findings from recent neuroimaging studies indicate that real-world search is mediated by 'what' and 'where' attentional templates that are implemented in high-level visual cortex. These templates represent target-diagnostic properties and likely target locations, respectively, and are shaped by object familiarity, scene context, and memory. We propose a framework for real-world search that incorporates these recent findings and specifies directions for future study.
Affiliation(s)
- Marius V Peelen
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068 Rovereto (TN), Italy.
| | - Sabine Kastner
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Department of Psychology, Princeton University, Princeton, NJ 08544, USA
48
Abstract
It is well known that observers can implicitly learn the spatial context of complex visual searches, such that future searches through repeated contexts are completed faster than those through novel contexts, even though observers remain at chance at discriminating repeated from new contexts. This contextual-cueing effect arises quickly (within less than five exposures) and asymptotes within 30 exposures to repeated contexts. In spite of being a robust effect (its magnitude is over 100 ms at the asymptotic level), the effect is implicit: Participants are usually at chance at discriminating old from new contexts at the end of an experiment, in spite of having seen each repeated context more than 30 times throughout a 50-min experiment. Here, we demonstrate that the speed at which the contextual-cueing effect arises can be modulated by external rewards associated with the search contexts (not with the performance itself). Following each visual search trial (and irrespective of a participant's search speed on the trial), we provided a reward, a penalty, or no feedback to the participant. Crucially, the type of feedback obtained was associated with the specific contexts, such that some repeated contexts were always associated with reward, and others were always associated with penalties. Implicit learning occurred fastest for contexts associated with positive feedback, though penalizing contexts also showed a learning benefit. Consistent feedback also produced faster learning than did variable feedback, though unexpected penalties produced the largest immediate effects on search performance.
49
50
Abstract
If a mental image is a rerepresentation of a perception, then properties such as luminance or brightness should also be conjured up in the image. We monitored pupil diameters with an infrared eye tracker while participants first saw and then generated mental images of shapes that varied in luminance or complexity, while looking at an empty gray background. Participants also imagined familiar scenarios (e.g., a “sunny sky” or a “dark room”) while looking at the same neutral screen. In all experiments, participants’ eye pupils dilated or constricted, respectively, in response to dark and bright imagined objects and scenarios. Shape complexity increased mental effort and pupillary sizes independently of shapes’ luminance. Because the participants were unable to voluntarily constrict their eyes’ pupils, the observed pupillary adjustments to imaginary light present a strong case for accounts of mental imagery as a process based on brain states similar to those that arise during perception.