1
Su Y, Shi Z, Wachtler T. A Bayesian observer model reveals a prior for natural daylights in hue perception. Vision Res 2024; 220:108406. [PMID: 38626536] [DOI: 10.1016/j.visres.2024.108406]
Abstract
Incorporating statistical characteristics of stimuli in perceptual processing can be highly beneficial for reliable estimation from noisy sensory measurements but may generate perceptual bias. According to Bayesian inference, perceptual biases arise from the integration of internal priors with noisy sensory inputs. In this study, we used a Bayesian observer model to derive biases and priors in hue perception based on discrimination data for hue ensembles with varying levels of chromatic noise. Our results showed that discrimination thresholds for isoluminant stimuli with hue defined by azimuth angle in cone-opponent color space exhibited a bimodal pattern, with lowest thresholds near a non-cardinal blue-yellow axis that aligns closely with the variation of natural daylights. Perceptual biases showed zero crossings around this axis, indicating repulsion away from yellow and attraction towards blue. These biases could be explained by the Bayesian observer model through a non-uniform prior with a preference for blue. Our findings suggest that visual processing takes advantage of knowledge of the distribution of colors in natural environments for hue perception.
Affiliation(s)
- Yannan Su
- Faculty of Biology, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Germany; Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Germany.
- Zhuanghua Shi
- General and Experimental Psychology, Ludwig-Maximilians-Universität München, Munich, Germany.
- Thomas Wachtler
- Faculty of Biology, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Germany; Bernstein Center for Computational Neuroscience Munich, Planegg-Martinsried, Germany.
2
Charlton JA, Młynarski WF, Bai YH, Hermundstad AM, Goris RLT. Environmental dynamics shape perceptual decision bias. PLoS Comput Biol 2023; 19:e1011104. [PMID: 37289753] [DOI: 10.1371/journal.pcbi.1011104]
Abstract
To interpret the sensory environment, the brain combines ambiguous sensory measurements with knowledge that reflects context-specific prior experience. But environmental contexts can change abruptly and unpredictably, resulting in uncertainty about the current context. Here we address two questions: how should context-specific prior knowledge optimally guide the interpretation of sensory stimuli in changing environments, and do human decision-making strategies resemble this optimum? We probe these questions with a task in which subjects report the orientation of ambiguous visual stimuli that were drawn from three dynamically switching distributions, representing different environmental contexts. We derive predictions for an ideal Bayesian observer that leverages knowledge about the statistical structure of the task to maximize decision accuracy, including knowledge about the dynamics of the environment. We show that its decisions are biased by the dynamically changing task context. The magnitude of this decision bias depends on the observer's continually evolving belief about the current context. The model therefore not only predicts that decision bias will grow as the context is indicated more reliably, but also as the stability of the environment increases, and as the number of trials since the last context switch grows. Analysis of human choice data validates all three predictions, suggesting that the brain leverages knowledge of the statistical structure of environmental change when interpreting ambiguous sensory signals.
Affiliation(s)
- Julie A Charlton
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
- Yoon H Bai
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
- Ann M Hermundstad
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America
- Robbe L T Goris
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
3
Chapman AF, Störmer VS. Efficient tuning of attention to narrow and broad ranges of task-relevant feature values. Visual Cognition 2023. [DOI: 10.1080/13506285.2023.2192993]
4
Wedge-Roberts R, Aston S, Beierholm U, Kentridge R, Hurlbert A, Nardini M, Olkkonen M. Developmental changes in colour constancy in a naturalistic object selection task. Dev Sci 2023; 26:e13306. [PMID: 35943256] [DOI: 10.1111/desc.13306]
Abstract
When the illumination falling on a surface changes, so does the reflected light. Despite this, adult observers are good at perceiving surfaces as relatively unchanging, an ability termed colour constancy. Very few studies have investigated colour constancy in infants, and even fewer in children. Here we asked whether there is a difference in colour constancy between children and adults; what the developmental trajectory is between six and 11 years; and whether the pattern of constancy across illuminations and reflectances differs between adults and children. To this end, we developed a novel, child-friendly, computer-based object selection task in which observers saw a dragon's favourite sweet under a neutral illumination and picked the matching sweet from an array of eight seen under a different illumination (blue, yellow, red, or green). This set contained a reflectance match (colour constant; perfect performance) and a tristimulus match (colour inconstant). We ran two experiments, with two-dimensional scenes in one and three-dimensional renderings in the other. Twenty-six adults and 33 children took part in the first experiment; 26 adults and 40 children took part in the second. Children performed better than adults on this task, and performance decreased with age in both experiments. We found differences across illuminations and sweets, but a similar pattern across both age groups. This unexpected finding might reflect a real decrease in colour constancy from childhood to adulthood, explained by developmental changes in the perceptual and cognitive mechanisms underpinning colour constancy, or by differences in task strategies between children and adults. HIGHLIGHTS: Six- to 11-year-old children demonstrated better performance than adults on a colour constancy object selection task. Performance decreased with age over childhood. These findings may indicate development of cognitive strategies used to overcome automatic colour constancy mechanisms.
Affiliation(s)
- Stacey Aston
- Department of Psychology, Durham University, Durham, UK
- Robert Kentridge
- Department of Psychology, Durham University, Durham, UK; Azrieli Programme in Brain, Mind & Consciousness, Canadian Institute for Advanced Research, Toronto, Canada
- Anya Hurlbert
- Neuroscience, Institute of Biosciences, Newcastle University, Newcastle upon Tyne, UK
- Marko Nardini
- Department of Psychology, Durham University, Durham, UK
- Maria Olkkonen
- Department of Psychology, Durham University, Durham, UK; Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
5
Cohen-Duwek H, Slovin H, Ezra Tsur E. Computational modeling of color perception with biologically plausible spiking neural networks. PLoS Comput Biol 2022; 18:e1010648. [PMID: 36301992] [PMCID: PMC9642903] [DOI: 10.1371/journal.pcbi.1010648]
Abstract
Biologically plausible computational modeling of visual perception has the potential to link high-level visual experiences to their underlying neurons' spiking dynamics. In this work, we propose a neuromorphic (brain-inspired) Spiking Neural Network (SNN)-driven model for the reconstruction of colorful images from retinal inputs. We compared our results to experimentally obtained V1 neuronal activity maps in a macaque monkey using voltage-sensitive dye imaging and used the model to demonstrate and critically explore color constancy, color assimilation, and ambiguous color perception. Our parametric implementation allows critical evaluation of visual phenomena in a single biologically plausible computational framework. It uses a parametrized combination of high and low pass image filtering and SNN-based filling-in Poisson processes to provide adequate color image perception while accounting for differences in individual perception.
Affiliation(s)
- Hadar Cohen-Duwek
- Neuro-Biomorphic Engineering Lab, Department of Mathematics and Computer Science, The Open University of Israel, Ra’anana, Israel
- Hamutal Slovin
- The Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, Israel
- Elishai Ezra Tsur
- Neuro-Biomorphic Engineering Lab, Department of Mathematics and Computer Science, The Open University of Israel, Ra’anana, Israel
6
Skelton AE, Maule J, Franklin A. Infant color perception: Insight into perceptual development. Child Dev Perspect 2022; 16:90-95. [PMID: 35915666] [PMCID: PMC9314692] [DOI: 10.1111/cdep.12447]
Abstract
A remarkable amount of perceptual development occurs in the first year after birth. In this article, we spotlight the case of color perception. We outline how within just 6 months, infants go from very limited detection of color as newborns to a more sophisticated perception of color that enables them to make sense of objects and the world around them. We summarize the evidence that by 6 months, infants can perceive the dimensions of color and categorize it, and have at least rudimentary mechanisms to keep color perceptually constant despite variation in illumination. In addition, infants' sensitivity to color relates to statistical regularities of color in natural scenes. We illustrate the contribution of these findings to understanding the development of perceptual skills such as discrimination, categorization, and constancy. We also discuss the relevance of the findings for broader questions about perceptual development and identify directions for research.
Affiliation(s)
- Alice E Skelton
- The Sussex Colour Group & Baby Lab, School of Psychology, University of Sussex, Brighton, UK
- John Maule
- The Sussex Colour Group & Baby Lab, School of Psychology, University of Sussex, Brighton, UK
- Anna Franklin
- The Sussex Colour Group & Baby Lab, School of Psychology, University of Sussex, Brighton, UK
7
Singh V, Burge J, Brainard DH. Equivalent noise characterization of human lightness constancy. J Vis 2022; 22:2. [PMID: 35394508] [PMCID: PMC8994201] [DOI: 10.1167/jov.22.5.2]
Abstract
A goal of visual perception is to provide stable representations of task-relevant scene properties (e.g. object reflectance) despite variation in task-irrelevant scene properties (e.g. illumination and reflectance of other nearby objects). To study such stability in the context of the perceptual representation of lightness, we introduce a threshold-based psychophysical paradigm. We measure how thresholds for discriminating the achromatic reflectance of a target object (task-relevant property) in rendered naturalistic scenes are impacted by variation in the reflectance functions of background objects (task-irrelevant property), using a two-alternative forced-choice paradigm in which the reflectance of the background objects is randomized across the two intervals of each trial. We control the amount of background reflectance variation by manipulating a statistical model of naturally occurring surface reflectances. For low background object reflectance variation, discrimination thresholds were nearly constant, indicating that observers' internal noise determines threshold in this regime. As background object reflectance variation increases, its effects start to dominate performance. A model based on signal detection theory allows us to express the effects of task-irrelevant variation in terms of equivalent noise, that is, relative to the intrinsic precision of the task-relevant perceptual representation. The results indicate that although naturally occurring background object reflectance variation does intrude on the perceptual representation of target object lightness, the effect is modest, within a factor of two of the equivalent noise level set by internal noise.
Affiliation(s)
- Vijay Singh
- Department of Physics, North Carolina Agricultural and Technical State University, Greensboro, NC, USA
- Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA, USA
- Johannes Burge
- Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA, USA
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, PA, USA
- Bioengineering Graduate Group, University of Pennsylvania, Philadelphia, PA, USA
- David H Brainard
- Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA, USA
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, PA, USA
- Bioengineering Graduate Group, University of Pennsylvania, Philadelphia, PA, USA
8
Murray RF. Lightness perception in complex scenes. Annu Rev Vis Sci 2021; 7.
Abstract
Lightness perception is the perception of achromatic surface colors: black, white, and shades of grey. Lightness has long been a central research topic in experimental psychology, as perceiving surface color is an important visual task but also a difficult one due to the deep ambiguity of retinal images. In this article, I review psychophysical work on lightness perception in complex scenes over the past 20 years, with an emphasis on work that supports the development of computational models. I discuss Bayesian models, equivalent illumination models, multidimensional scaling, anchoring theory, spatial filtering models, natural scene statistics, and related work in computer vision. I review open topics in lightness perception that seem ready for progress, including the relationship between lightness and brightness, and developing more sophisticated computational models of lightness in complex scenes. Expected final online publication date for the Annual Review of Vision Science, Volume 7 is September 2021.
Affiliation(s)
- Richard F Murray
- Department of Psychology and Centre for Vision Research, York University, Toronto M3J 1P3, Canada
9
Morimoto T, Kusuyama T, Fukuda K, Uchikawa K. Human color constancy based on the geometry of color distributions. J Vis 2021; 21:7. [PMID: 33661281] [PMCID: PMC7937993] [DOI: 10.1167/jov.21.3.7]
Abstract
The physical inputs to our visual system are dictated by the interplay between lights and surfaces; thus, for surface color to be stably perceived, the influence of the illuminant must be discounted. To reveal our strategy to infer the illuminant color, we conducted three psychophysical experiments designed to test our optimal color hypothesis that we internalize the physical color gamut under various illuminants and apply the prior to estimate the illuminant color. In each experiment, we presented 61 hexagons arranged without spatial gaps, where the surrounding 60 hexagons were set to have a specific shape in their color distribution. We asked participants to adjust the color of a center test field so that it appeared to be a full-white surface placed under a test illuminant. Results and computational modeling suggested that, although our proposed model is limited in accounting for estimation of illuminant intensity by human observers, it agrees fairly well with the estimates of illuminant chromaticity in most tested conditions. The accuracy of estimation generally outperformed other tested conventional color constancy models. These results support the hypothesis that our visual system can utilize the geometry of scene color distribution to achieve color constancy.
Affiliation(s)
- Takuma Morimoto
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Takahiro Kusuyama
- Department of Information Processing, Tokyo Institute of Technology, Yokohama, Japan
- Kazuho Fukuda
- Department of Information Design, Kogakuin University, Tokyo, Japan
- Keiji Uchikawa
- Human Media Research Center, Kanagawa Institute of Technology, Atsugi, Japan
10
Kawasaki Y, Reid JN, Ikeda K, Liu M, Karlsson BSA. Color Judgments of #The Dress and #The Jacket in a Sample of Different Cultures. Perception 2021; 50:216-230. [PMID: 33601952] [DOI: 10.1177/0301006621991320]
Abstract
Two viral photographs, #The Dress and #The Jacket, have received recent attention in research on perception as the colors in these photos are ambiguous. In the current study, we examined perception of these photographs across three different cultural samples: Sweden (Western culture), China (Eastern culture), and India (between Western and Eastern cultures). Participants also answered questions about gender, age, morningness, and previous experience of the photographs. Analyses revealed that only age was a significant predictor for the perception of The Dress, as older people were more likely to perceive the colors as blue and black than white and gold. In contrast, multiple factors predicted perception of The Jacket, including age, previous experience, and country. Consistent with some previous research, this suggests that the perception of The Jacket is a different phenomenon from perception of The Dress and is influenced by additional factors, most notably culture.
Affiliation(s)
- Yayoi Kawasaki
- Waseda University, Japan; RISE Research Institutes of Sweden, Sweden
- J Nick Reid
- Western University, Canada; RISE Research Institutes of Sweden, Sweden
- Kazuhiro Ikeda
- Shokei Gakuin University, Japan; RISE Research Institutes of Sweden, Sweden
- Meiling Liu
- DIS - Study Abroad in Scandinavia, Sweden; RISE Research Institutes of Sweden, Sweden
11
Rosen C, Tufano M, Humpston CS, Chase KA, Jones N, Abramowitz AC, Franco Chakkalakal A, Sharma RP. The Sensory and Perceptual Scaffolding of Absorption, Inner Speech, and Self in Psychosis. Front Psychiatry 2021; 12:649808. [PMID: 34045979] [PMCID: PMC8145281] [DOI: 10.3389/fpsyt.2021.649808]
Abstract
This study examines the interconnectedness between absorption, inner speech, self, and psychopathology. Absorption involves an intense focus and immersion in mental imagery, sensory/perceptual stimuli, or vivid imagination that involves decreased self-awareness and alterations in consciousness. In psychosis, the dissolution and permeability in the demarcation between self and one's sensory experiences and perceptions, and also between self-other and/or inter-object boundaries, alter one's sense of self. Thus, as the individual integrates these changes, new "meaning making" or understanding evolves as part of an ongoing inner dialogue and dialogue with others. This study consisted of 117 participants: 81 participants with psychosis and 36 controls. We first conducted a bivariate correlation to elucidate the relationship between absorption and inner speech. We next conducted hierarchical multiple regressions to examine the effect of absorption and inner speech to predict psychopathology. Lastly, we conducted a network analysis and applied extended Bayesian Information Criterion to select the best model. We showed that in both the control and psychosis groups, dialogic and emotional/motivational types of inner speech were strongly associated with absorption subscales, apart from the aesthetic subscale in the control group, which was not significant, while in psychosis, condensed inner speech was uniquely associated with increased imaginative involvement. In psychosis, we also demonstrated that altered consciousness, dialogic, and emotional/motivational inner speech all predicted positive symptoms. In terms of network associations, imaginative involvement was the most central, influential, and most highly predictive node in the model, from which all other nodes related to inner speech and psychopathology are connected. This study shows a strong interrelatedness between absorption, inner speech, and psychosis, thus identifying potentially fertile ground for future research and directions, particularly in the exploration of the underlying construct of imaginative involvement in psychotic symptoms.
Affiliation(s)
- Cherise Rosen
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, United States
- Michele Tufano
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, United States
- Clara S Humpston
- School of Psychology, Institute for Mental Health, University of Birmingham, Birmingham, United Kingdom
- Kayla A Chase
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, United States
- Nev Jones
- Department of Psychiatry, University of South Florida, Tampa, FL, United States
- Amy C Abramowitz
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, United States
- Rajiv P Sharma
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, United States
12
Yildiz GY, Sperandio I, Kettle C, Chouinard PA. Interocular transfer effects of linear perspective cues and texture gradients in the perceptual rescaling of size. Vision Res 2020; 179:19-33. [PMID: 33276195] [DOI: 10.1016/j.visres.2020.11.005]
Abstract
Our objective was to determine whether the influence of linear perspective cues and texture gradients in the perceptual rescaling of stimulus size transfers from one eye to the other. In experiment 1, we systematically added linear perspective cues and texture gradients in a background image of the corridor illusion. To determine whether perceptual size rescaling takes place at earlier or later stages, we tested how the perceived size of top and bottom rings changed under binocular (rings and background presented to both eyes), monocular (rings and background presented to the dominant eye only), and dichoptic (rings and background presented separately to the dominant and nondominant eyes, respectively) viewing conditions. We found differences between viewing conditions in the perceived size of the rings when linear perspective cues, but not texture gradients, were presented. Specifically, linear perspective cues produced a stronger illusion under the monocular compared to the dichoptic viewing condition. Hence, there was partial interocular transfer from the linear perspective cues, suggesting a dominant role of monocular neural populations in mediating the corridor illusion. In experiment 2, we repeated similar procedures with a more traditional Ponzo illusion background. Contrary to findings from experiment 1, there was a full interocular transfer with the presence of the converging lines, suggesting a dominant role of binocular neural populations. We conclude that higher order visual areas, which contain binocular neural populations, are more involved in the perceptual rescaling of size evoked by linear perspective cues in the Ponzo compared to the corridor illusion.
Affiliation(s)
- Gizem Y Yildiz
- Department of Psychology and Counselling, School of Psychology and Public Health, La Trobe University, Melbourne, Australia
- Irene Sperandio
- Department of Psychology and Cognitive Science, University of Trento, Rovereto, TN, Italy
- Christine Kettle
- Department of Pharmacy and Biomedical Sciences, School of Molecular Sciences, La Trobe University, Melbourne, Australia
- Philippe A Chouinard
- Department of Psychology and Counselling, School of Psychology and Public Health, La Trobe University, Melbourne, Australia
13
Wedge-Roberts R, Aston S, Beierholm U, Kentridge R, Hurlbert A, Nardini M, Olkkonen M. Specular highlights improve color constancy when other cues are weakened. J Vis 2020; 20:4. [PMID: 33170203] [PMCID: PMC7674000] [DOI: 10.1167/jov.20.12.4]
Abstract
Previous studies suggest that to achieve color constancy, the human visual system makes use of multiple cues, including a priori assumptions about the illumination ("daylight priors"). Specular highlights have been proposed to aid constancy, but the evidence for their usefulness is mixed. Here, we used a novel cue-combination approach to test whether the presence of specular highlights or the validity of a daylight prior improves illumination chromaticity estimates, inferred from achromatic settings, to determine whether and under which conditions either cue contributes to color constancy. Observers made achromatic settings within three-dimensional rendered scenes containing matte or glossy shapes, illuminated by either daylight or nondaylight illuminations. We assessed both the variability of these settings and their accuracy, in terms of the standard color constancy index (CCI). When a spectrally uniform background was present, neither CCIs nor variability improved with specular highlights or daylight illuminants (Experiment 1). When a Mondrian background was introduced, CCIs decreased overall but were higher for scenes containing glossy, as opposed to matte, shapes (Experiments 2 and 3). There was no overall reduction in variability of settings and no benefit for scenes illuminated by daylights. Taken together, these results suggest that the human visual system indeed uses specular highlights to improve color constancy but only when other cues, such as from the local surround, are weakened.
Affiliation(s)
- Stacey Aston
- Department of Psychology, Durham University, Durham, UK
- Robert Kentridge
- Department of Psychology, Durham University, Durham, UK
- Azrieli Programme in Brain, Mind & Consciousness, Canadian Institute for Advanced Research, Toronto, Canada
- Anya Hurlbert
- Neuroscience, Institute of Biosciences, Newcastle University, Newcastle, UK
- Marko Nardini
- Department of Psychology, Durham University, Durham, UK
- Maria Olkkonen
- Department of Psychology, Durham University, Durham, UK
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
14
Aston S, Denisova K, Hurlbert A, Olkkonen M, Pearce B, Rudd M, Werner A, Xiao B. Exploring the Determinants of Color Perception Using #Thedress and Its Variants: The Role of Spatio-Chromatic Context, Chromatic Illumination, and Material-Light Interaction. Perception 2020; 49:1235-1251. [PMID: 33183137] [PMCID: PMC7672784] [DOI: 10.1177/0301006620963808]
Abstract
The colors that people see depend not only on the surface properties of objects but also on how these properties interact with light as well as on how light reflected from objects interacts with an individual's visual system. Because individual visual systems vary, the same visual stimulus may elicit different perceptions from different individuals. #thedress phenomenon drove home this point: different individuals viewed the same image and reported it to be widely different colors: blue and black versus white and gold. This phenomenon inspired a collection of demonstrations presented at the Vision Sciences Society 2015 Meeting which showed how spatial and temporal manipulations of light spectra affect people's perceptions of material colors and illustrated the variability in individual color perception. The demonstrations also explored the effects of temporal alterations in metameric lights, including Maxwell's Spot, an entoptic phenomenon. Crucially, the demonstrations established that #thedress phenomenon occurs not only for images of the dress but also for the real dress under real light sources of different spectral composition and spatial configurations.
Affiliation(s)
- Kristina Denisova
- Columbia University Irving Medical Center, United States; New York State Psychiatric Institute, United States; Teachers College, Columbia University, United States
- Annette Werner
- Max Planck Institute for Biological Cybernetics, Germany
- Bei Xiao
- American University, United States
15
Abstract
Previous research has shown that the typical or memory color of an object is perceived in images of that object, even when the image is achromatic. We performed an experiment to investigate whether the implied color in greyscale images could influence the perceived color of subsequent, simple stimuli. We used a standard top-up adaptation technique along with a roving-pedestal, two-alternative spatial forced-choice method for measuring perceptual bias without contamination from any response or decision biases. Adaptors were achromatic images of natural objects that are normally seen with diagnostic color. We found that, in some circumstances, greyscale adapting images had a biasing effect, shifting the achromatic point toward the implied color, in comparison with phase-scrambled images. We interpret this effect as evidence of adaptation in chromatic signaling mechanisms that receive top-down input from knowledge of object color. This implied color adaptation effect was particularly strong from images of bananas, which are popular stimuli in memory color experiments. We also consider the effect in a color constancy context, in which the implied color is used by the visual system to estimate an illuminant, but find our results inconsistent with this explanation.
Affiliation(s)
- R. J. Lee
- School of Psychology, University of Lincoln, Lincoln, UK
- G. Mather
- School of Psychology, University of Lincoln, Lincoln, UK

16
Nascimento SMC, Pastilha RC, Brenner E. Neighboring chromaticity influences how white a surface looks. Vision Res 2019; 165:31-35. [PMID: 31622903 DOI: 10.1016/j.visres.2019.09.007] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2019] [Revised: 09/13/2019] [Accepted: 09/25/2019] [Indexed: 11/19/2022]
Abstract
To identify surface properties independently of the illumination, the visual system must make assumptions about the statistics of scenes and their illumination. Are assumptions about the intensity of the illumination independent of assumptions about its chromaticity? To find out, we asked participants to judge whether test patches within three different sets of surrounding surfaces were white or grey. Two sets were matched in terms of their maximal luminance, their mean luminance and chromaticity, and the variability in their luminance and chromaticity, but differed in how luminance and chromaticity were associated: the highest luminance was associated either with colorful surfaces or with achromatic ones. We found that test patches had to have a higher luminance to appear white when the highest luminance in the surrounding was associated with colorful surfaces. This makes sense if one considers that being colorful implies that a surface reflects only part of the light that falls on it, meaning that the illumination must have a higher luminance (a perfectly white surface reflects all of the light falling on it). In the third set, the colorful surfaces had the same luminance as in the set in which they were associated with the highest luminance, but the achromatic surfaces had a lower luminance, so that the overall mean luminance was lower. Despite the constraints on the illumination being identical, test patches did not have to have as high a luminance to appear white for the third set. Considering the layout of the surfaces in the surrounding revealed that test patches did have to have the same high luminance if the high-luminance colorful surfaces were adjacent to the target patch. Thus, the assumptions about the possible illumination are applied locally. A possible mechanism is relying on the contrast within each type of cone: for a surface to appear white it must stimulate each of the three kinds of cones substantially more than do any neighboring surfaces.
Affiliation(s)
- Ruben C Pastilha
- Centre of Physics, Gualtar Campus, University of Minho, 4710-057 Braga, Portugal
- Eli Brenner
- Faculty of Human Movement Sciences, VU University, Amsterdam, The Netherlands

17
Abstract
Smooth pursuit eye movements maintain the line of sight on smoothly moving targets. Although often studied as a response to sensory motion, pursuit anticipates changes in motion trajectories, thus reducing harmful consequences due to sensorimotor processing delays. Evidence for predictive pursuit includes (a) anticipatory smooth eye movements (ASEM) in the direction of expected future target motion that can be evoked by perceptual cues or by memory for recent motion, (b) pursuit during periods of target occlusion, and (c) improved accuracy of pursuit with self-generated or biologically realistic target motions. Predictive pursuit has been linked to neural activity in the frontal cortex and in sensory motion areas. As behavioral and neural evidence for predictive pursuit grows and statistically based models augment or replace linear systems approaches, pursuit is being regarded less as a reaction to immediate sensory motion and more as a predictive response, with retinal motion serving as one of a number of contributing cues.
Affiliation(s)
- Eileen Kowler
- Department of Psychology, Rutgers University, Piscataway, New Jersey 08854, USA
- Jason F Rubinstein
- Department of Psychology, Rutgers University, Piscataway, New Jersey 08854, USA
- Elio M Santos
- Department of Psychology, Rutgers University, Piscataway, New Jersey 08854, USA. Current affiliation: Department of Psychology, State University of New York, College at Oneonta, Oneonta, New York 13820, USA
- Jie Wang
- Department of Psychology, Rutgers University, Piscataway, New Jersey 08854, USA

18
Abstract
Quite a few cognitive scientists are working toward a naturalization of phenomenology. A closer look at the relevant literature, however, shows that the ‘naturalizing phenomenology’ proposals rest on different conceptions, assumptions, and formalisms, further differentiated by different philosophical and/or scientific concerns. This paper argues that the original Husserlian stance is deeper, clearer, and more advanced than most of its supposed contemporary improvements. The recent achievements of experimental phenomenology show how to ‘naturalize’ phenomenology without destroying its guiding assumptions. The requirements grounding a scientific explanation of subjective experience are discussed, such as the nature of the stimuli, their variables, and their manipulation by properly phenomenological methods.
Collapse
Affiliation(s)
- Liliana Albertazzi
- Laboratory of Experimental Phenomenology, Department of Humanities, University of Trento, Trento, Italy
| |
Collapse
|
19
|
Egger SW, Jazayeri M. A nonlinear updating algorithm captures suboptimal inference in the presence of signal-dependent noise. Sci Rep 2018; 8:12597. [PMID: 30135441 PMCID: PMC6105733 DOI: 10.1038/s41598-018-30722-0] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2018] [Accepted: 08/02/2018] [Indexed: 11/14/2022] Open
Abstract
Bayesian models have advanced the idea that humans combine prior beliefs and sensory observations to optimize behavior. How the brain implements Bayes-optimal inference, however, remains poorly understood. Simple behavioral tasks suggest that the brain can flexibly represent probability distributions. An alternative view is that the brain relies on simple algorithms that can implement Bayes-optimal behavior only when the computational demands are low. To distinguish between these alternatives, we devised a task in which Bayes-optimal performance could not be matched by simple algorithms. We asked subjects to estimate and reproduce a time interval by combining prior information with one or two sequential measurements. In the domain of time, measurement noise increases with duration. This property takes the integration of multiple measurements beyond the reach of simple algorithms. We found that subjects were able to update their estimates using the second measurement but their performance was suboptimal, suggesting that they were unable to update full probability distributions. Instead, subjects’ behavior was consistent with an algorithm that predicts upcoming sensory signals, and applies a nonlinear function to errors in prediction to update estimates. These results indicate that the inference strategies employed by humans may deviate from Bayes-optimal integration when the computational demands are high.
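The computational point in this abstract, that signal-dependent noise breaks simple averaging schemes, can be made concrete with a small sketch. This is an assumed toy setup, not the authors' task or fitted model: with scalar timing noise (standard deviation proportional to the hypothesized duration), the likelihood width depends on the candidate interval, so the Bayes-optimal estimate is no longer a precision-weighted average and is computed here on a grid. The noise constant `w` and the prior range are illustrative values.

```python
import numpy as np

# Toy sketch of Bayes-optimal interval estimation under signal-dependent
# (scalar) noise, sd = w * t. Parameter values (w, prior range) are
# illustrative assumptions, not taken from the paper.

def bls_estimate(m1, m2, w=0.15, t_lo=0.6, t_hi=1.0, n=2001):
    t = np.linspace(t_lo, t_hi, n)   # uniform prior over plausible durations
    sd = w * t                       # noise grows with the hypothesized duration
    # Likelihood of both measurements for each candidate duration t
    like = (np.exp(-0.5 * ((m1 - t) / sd) ** 2) / sd
            * np.exp(-0.5 * ((m2 - t) / sd) ** 2) / sd)
    post = like / like.sum()         # posterior over the grid (uniform prior)
    return float((t * post).sum())   # posterior mean estimate

est = bls_estimate(0.9, 0.95)
```

Because the noise grows with t, shorter candidate durations explain the measurements with narrower likelihoods, so the estimate is pulled below the plain average of the two measurements (and is further shaped by the prior's range); heuristics that ignore this dependence cannot match this benchmark.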
Affiliation(s)
- Seth W Egger
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Mehrdad Jazayeri
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA

20
Młynarski WF, Hermundstad AM. Adaptive coding for dynamic sensory inference. eLife 2018; 7:32055. [PMID: 29988020 PMCID: PMC6039184 DOI: 10.7554/elife.32055] [Citation(s) in RCA: 45] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2017] [Accepted: 04/11/2018] [Indexed: 12/30/2022] Open
Abstract
Behavior relies on the ability of sensory systems to infer properties of the environment from incoming stimuli. The accuracy of inference depends on the fidelity with which behaviorally relevant properties of stimuli are encoded in neural responses. High-fidelity encodings can be metabolically costly, but low-fidelity encodings can cause errors in inference. Here, we discuss general principles that underlie the tradeoff between encoding cost and inference error. We then derive adaptive encoding schemes that dynamically navigate this tradeoff. These optimal encodings tend to increase the fidelity of the neural representation following a change in the stimulus distribution, and reduce fidelity for stimuli that originate from a known distribution. We predict dynamical signatures of such encoding schemes and demonstrate how known phenomena, such as burst coding and firing rate adaptation, can be understood as hallmarks of optimal coding for accurate inference.
Affiliation(s)
- Wiktor F Młynarski
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- Ann M Hermundstad
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States

21
Witzel C, Olkkonen M, Gegenfurtner KR. A Bayesian Model of the Memory Colour Effect. Iperception 2018; 9:2041669518771715. [PMID: 29760874 PMCID: PMC5946617 DOI: 10.1177/2041669518771715] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2017] [Accepted: 03/28/2018] [Indexed: 11/30/2022] Open
Abstract
According to the memory colour effect, the colour of a colour-diagnostic object is not perceived independently of the object itself. Instead, it has been shown through an achromatic adjustment method that colour-diagnostic objects still appear slightly in their typical colour, even when they are colourimetrically grey. Bayesian models provide a promising approach to capture the effect of prior knowledge on colour perception and to link these effects to more general effects of cue integration. Here, we model memory colour effects using prior knowledge about typical colours as priors for the grey adjustments in a Bayesian model. This simple model does not involve any fitting of free parameters. The Bayesian model roughly captured the magnitude of the measured memory colour effect for photographs of objects. To some extent, the model predicted observed differences in memory colour effects across objects. The model could not account for the differences in memory colour effects across different levels of realism in the object images. The Bayesian model provides a particularly simple account of memory colour effects, capturing some of the multiple sources of variation of these effects.
Affiliation(s)
- Maria Olkkonen
- Department of Psychology, Durham University, Durham, UK; Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland

22
Vemuri K, Srivastava A, Agrawal S, Anand M. Age, pupil size differences, and color choices for the "dress" and the "jacket". JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2018; 35:B347-B355. [PMID: 29603963 DOI: 10.1364/josaa.35.00b347] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/09/2017] [Accepted: 03/07/2018] [Indexed: 06/08/2023]
Abstract
The color identification responses to photographs of #thedress (white/gold and blue/black) and a jacket (white/blue, green/black, and teal) reveal obvious individual differences in color perception. To explore a possible association between pupil size/retinal illuminance and color perception, we recorded the pupil diameters of participants shown 22 uniformly colored screens (generated from RGB values on a laptop LCD display) followed by photographs of #thedress and the jacket. We analyzed (a) pupil size differences between the color groups and (b) age-related changes in pupil size and/or reflex and their influence on color perception. The data confirm that the average pupil size of the white/gold group was significantly smaller than that of the blue/black group for the dress. The pupil size difference between the color groups was slightly higher in the 21-30-year and 31-55-year age groups but not in the 18-20-year age group, while no similar variance was observed for the jacket color groups. Interestingly, the average pupil size of both color groups was smaller for the dress than at baseline (collected with a gray hue displayed on the screen), whereas the opposite effect was observed for the jacket. The contrasting results for the two photographs do not support a strong inference that changes in pupil size alone drive the differences in color perception. A probable explanation of the pupil size difference, however, is subjective variation in the perceptual interpretation of the illumination cues in the photographs.
23
Abstract
Human perceptual decisions are often described as optimal. Critics of this view have argued that claims of optimality are overly flexible and lack explanatory power. Meanwhile, advocates for optimality have countered that such criticisms single out a few selected papers. To elucidate the issue of optimality in perceptual decision making, we review the extensive literature on suboptimal performance in perceptual tasks. We discuss eight different classes of suboptimal perceptual decisions, including improper placement, maintenance, and adjustment of perceptual criteria; inadequate tradeoff between speed and accuracy; inappropriate confidence ratings; misweightings in cue combination; and findings related to various perceptual illusions and biases. In addition, we discuss conceptual shortcomings of a focus on optimality, such as definitional difficulties and the limited value of optimality claims in and of themselves. We therefore advocate that the field drop its emphasis on whether observed behavior is optimal and instead concentrate on building and testing detailed observer models that explain behavior across a wide range of tasks. To facilitate this transition, we compile the proposed hypotheses regarding the origins of suboptimal perceptual decisions reviewed here. We argue that verifying, rejecting, and expanding these explanations for suboptimal behavior - rather than assessing optimality per se - should be among the major goals of the science of perceptual decision making.
Affiliation(s)
- Dobromir Rahnev
- School of Psychology, Georgia Institute of Technology, Atlanta, GA 30332
- Rachel N Denison
- Department of Psychology and Center for Neural Science, New York University, New York, NY 10003

24
Lafer-Sousa R, Conway BR. #TheDress: Categorical perception of an ambiguous color image. J Vis 2017; 17:25. [PMID: 29090319 PMCID: PMC5672910 DOI: 10.1167/17.12.25] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2017] [Accepted: 08/10/2017] [Indexed: 12/02/2022] Open
Abstract
We present a full analysis of data from our preliminary report (Lafer-Sousa, Hermann, & Conway, 2015) and test whether #TheDress image is multistable. A multistable image must give rise to more than one mutually exclusive percept, typically within single individuals. Clustering algorithms of color-matching data showed that the dress was seen categorically, as white/gold (W/G) or blue/black (B/K), with a blue/brown transition state. Multinomial regression predicted categorical labels. Consistent with our prior hypothesis, W/G observers inferred a cool illuminant, whereas B/K observers inferred a warm illuminant; moreover, subjects could use skin color alone to infer the illuminant. The data provide some, albeit weak, support for our hypothesis that day larks see the dress as W/G and night owls see it as B/K. About half of observers who were previously familiar with the image reported switching categories at least once. Switching probability increased with professional art experience. Priming with an image that disambiguated the dress as B/K biased reports toward B/K (priming with W/G had negligible impact); furthermore, knowledge of the dress's true colors and any prior exposure to the image shifted the population toward B/K. These results show that some people have switched their perception of the dress. Finally, consistent with a role of attention and local image statistics in determining how multistable images are seen, we found that observers tended to discount as achromatic the dress component that they did not attend to: B/K reporters focused on a blue region, whereas W/G reporters focused on a golden region.
Affiliation(s)
- Rosa Lafer-Sousa
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Bevil R Conway
- Laboratory of Sensorimotor Research, National Eye Institute, and National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA

25
Abstract
Colors are rarely uniform, yet little is known about how people represent color distributions. We introduce a new method for studying color ensembles based on intertrial learning in visual search. Participants looked for an oddly colored diamond among diamonds with colors taken from either uniform or Gaussian color distributions. On test trials, the targets had various distances in feature space from the mean of the preceding distractor color distribution. Targets on test trials therefore served as probes into probabilistic representations of distractor colors. Test-trial response times revealed a striking similarity between the physical distribution of colors and their internal representations. The results demonstrate that the visual system represents color ensembles in a more detailed way than previously thought, coding not only mean and variance but, most surprisingly, the actual shape (uniform or Gaussian) of the distribution of colors in the environment.
Affiliation(s)
- Andrey Chetverikov
- Laboratory for Visual Perception and Visuomotor Control, Faculty of Psychology, School of Health Sciences, University of Iceland
- Cognitive Research Lab, Russian Presidential Academy of National Economy and Public Administration
- Department of Psychology, Saint Petersburg State University
- Gianluca Campana
- Dipartimento di Psicologia Generale, Università degli Studi di Padova
- Human Inspired Technology Research Centre, Università degli Studi di Padova
- Árni Kristjánsson
- Laboratory for Visual Perception and Visuomotor Control, Faculty of Psychology, School of Health Sciences, University of Iceland

26
Karlsson BSA, Allwood CM. What Is the Correct Answer about The Dress' Colors? Investigating the Relation between Optimism, Previous Experience, and Answerability. Front Psychol 2016; 7:1808. [PMID: 27933007 PMCID: PMC5120099 DOI: 10.3389/fpsyg.2016.01808] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2016] [Accepted: 11/02/2016] [Indexed: 11/13/2022] Open
Abstract
The Dress photograph, first displayed on the internet in 2015, revealed stunning individual differences in color perception. The aim of this study was to investigate whether laypersons believed that the question about The Dress' colors was answerable. Past research has found that optimism is related to judgments of how answerable knowledge questions with controversial answers are (Karlsson et al., 2016). Furthermore, familiarity with a question can create a feeling of knowing the answer (Reder and Ritter, 1992). Building on these findings, 186 participants saw the photo of The Dress and were asked about the correct answer to the question about The Dress' colors (“blue and black,” “white and gold,” “other, namely…,” or “there is no correct answer”). Choice of the alternative “there is no correct answer” was interpreted as believing the question was not answerable. This answer was chosen more often by optimists and by people who reported they had not seen The Dress before. We also found that among participants who had seen The Dress photo before, 19% perceived The Dress as “white and gold” but believed that the correct answer was “blue and black.” This, in analogy to previous findings about non-believed memories (Scoboria and Pascal, 2016), shows that people sometimes do not believe that the colors they have perceived are correct. Our results suggest that individual differences related to optimism and previous experience may influence whether an individual's perception of a photograph is judged a sufficient basis for valid conclusions about its colors. Further research on color judgments under ambiguous circumstances could benefit from separating individual perceptual experience from beliefs about the correct answer to the color question. Including the option “there is no correct answer” may also be beneficial.
27
Adams RA, Bauer M, Pinotsis D, Friston KJ. Dynamic causal modelling of eye movements during pursuit: Confirming precision-encoding in V1 using MEG. Neuroimage 2016; 132:175-189. [PMID: 26921713 PMCID: PMC4862965 DOI: 10.1016/j.neuroimage.2016.02.055] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2015] [Revised: 02/15/2016] [Accepted: 02/17/2016] [Indexed: 01/06/2023] Open
Abstract
This paper shows that it is possible to estimate the subjective precision (inverse variance) of Bayesian beliefs during oculomotor pursuit. Subjects viewed a sinusoidal target, with or without random fluctuations in its motion. Eye trajectories and magnetoencephalographic (MEG) data were recorded concurrently. The target was periodically occluded, such that its reappearance caused a visual evoked response field (ERF). Dynamic causal modelling (DCM) was used to fit models of eye trajectories and the ERFs. The DCM for pursuit was based on predictive coding and active inference, and predicts subjects' eye movements based on their (subjective) Bayesian beliefs about target (and eye) motion. The precisions of these hierarchical beliefs can be inferred from behavioural (pursuit) data. The DCM for MEG data used an established biophysical model of neuronal activity that includes parameters for the gain of superficial pyramidal cells, which is thought to encode precision at the neuronal level. Previous studies (using DCM of pursuit data) suggest that noisy target motion increases subjective precision at the sensory level: i.e., subjects attend more to the target's sensory attributes. We compared (noisy motion-induced) changes in the synaptic gain based on the modelling of MEG data to changes in subjective precision estimated using the pursuit data. We demonstrate that imprecise target motion increases the gain of superficial pyramidal cells in V1 (across subjects). Furthermore, increases in sensory precision, inferred by our behavioural DCM, correlate with the increase in gain in V1, across subjects. This is a step towards a fully integrated model of brain computations, cortical responses and behaviour that may provide a useful clinical tool in conditions like schizophrenia.
Highlights: The brain encodes states of the world probabilistically with means and precisions. Precision (inverse variance) may be encoded by the synaptic gain of pyramidal cells. We estimate subjects' sensory precision using a model of oculomotor pursuit and DCM. We estimate subjects' synaptic gain in V1 using DCM of MEG data during pursuit. Estimates of synaptic gain in V1 and sensory precision are significantly correlated.
Affiliation(s)
- Rick A Adams
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK
- Markus Bauer
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK; School of Psychology, University Park, Nottingham University, Nottingham, NG7 2RD, UK
- Dimitris Pinotsis
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK
- Karl J Friston
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK

28
Lafer-Sousa R, Hermann KL, Conway BR. Striking individual differences in color perception uncovered by 'the dress' photograph. Curr Biol 2015; 25:R545-6. [PMID: 25981795 PMCID: PMC4921196 DOI: 10.1016/j.cub.2015.04.053] [Citation(s) in RCA: 102] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
'The dress' is a peculiar photograph: by themselves the dress' pixels are brown and blue, colors associated with natural illuminants, but popular accounts (#TheDress) suggest the dress appears either white/gold or blue/black. Could the purported categorical perception arise because the original social-media question was an alternative-forced-choice? In a free-response survey (N = 1401), we found that most people, including those naïve to the image, reported white/gold or blue/black, but some said blue/brown. Reports of white/gold over blue/black were higher among older people and women. On re-test, some subjects reported a switch in perception, showing the image can be multistable. In a language-independent measure of perception, we asked subjects to identify the dress' colors from a complete color gamut. The results showed three peaks corresponding to the main descriptive categories, providing additional evidence that the brain resolves the image into one of three stable percepts. We hypothesize that these reflect different internal priors: some people favor a cool illuminant (blue sky), discount shorter wavelengths, and perceive white/gold; others favor a warm illuminant (incandescent light), discount longer wavelengths, and see blue/black. The remaining subjects may assume a neutral illuminant, and see blue/brown. We show that by introducing overt cues to the illumination, we can flip the dress color.
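The illuminant-prior hypothesis in this abstract can be illustrated with a toy von Kries-style computation. This is an illustrative sketch with made-up channel values, not the authors' analysis: dividing the same pixel values by a cool versus a warm assumed illuminant yields a near-achromatic versus a distinctly bluish surface estimate, mirroring the white/gold versus blue/black split.

```python
import numpy as np

# Toy von Kries-style illuminant discounting; all numbers are invented for
# illustration and are not taken from the paper.

def discount(pixel_rgb, assumed_illuminant_rgb):
    # Estimated surface reflectance: divide each channel by the assumed
    # illuminant's intensity in that channel.
    return np.asarray(pixel_rgb, float) / np.asarray(assumed_illuminant_rgb, float)

pixel = [0.5, 0.5, 0.7]                  # the same bluish pixel for both observers
cool = discount(pixel, [0.8, 0.9, 1.2])  # observer assumes a bluish-sky illuminant
warm = discount(pixel, [1.2, 1.0, 0.7])  # observer assumes an incandescent illuminant
print(cool, warm)
```

Under the cool assumption the three channels come out nearly equal (a whitish surface); under the warm assumption the blue channel dominates (a bluish surface).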
Affiliation(s)
- Rosa Lafer-Sousa
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139
- Katherine L. Hermann
- Neuroscience Program, Wellesley College, Wellesley, MA 02481; Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139
- Bevil R. Conway
- Neuroscience Program, Wellesley College, Wellesley, MA 02481; Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139

29
Fukuda K, Uchikawa K. Color constancy in a scene with bright colors that do not have a fully natural surface appearance. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2014; 31:A239-A246. [PMID: 24695177 DOI: 10.1364/josaa.31.00a239] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Theoretical and experimental approaches have proposed that color constancy involves a correction related to some average of stimulation over the scene, and some of the studies showed that the average gives greater weight to surrounding bright colors. However, in a natural scene, high-luminance elements do not necessarily carry information about the scene illuminant when the luminance is too high for it to appear as a natural object color. The question is how a surrounding color's appearance mode influences its contribution to the degree of color constancy. Here the stimuli were simple geometric patterns, and the luminance of surrounding colors was tested over the range beyond the luminosity threshold. Observers performed perceptual achromatic setting on the test patch in order to measure the degree of color constancy and evaluated the surrounding bright colors' appearance mode. Broadly, our results support the assumption that the visual system counts only the colors in the object-color appearance for color constancy. However, detailed analysis indicated that surrounding colors without a fully natural object-color appearance had some sort of influence on color constancy. Consideration of this contribution of unnatural object color might be important for precise modeling of human color constancy.
30
Olkkonen M, Allred SR. Short-term memory affects color perception in context. PLoS One 2014; 9:e86488. [PMID: 24475131 PMCID: PMC3903542 DOI: 10.1371/journal.pone.0086488] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2013] [Accepted: 12/10/2013] [Indexed: 11/23/2022] Open
Abstract
Color-based object selection - for instance, looking for ripe tomatoes in the market - places demands on both perceptual and memory processes: it is necessary to form a stable perceptual estimate of surface color from a variable visual signal, as well as to retain multiple perceptual estimates in memory while comparing objects. Nevertheless, perceptual and memory processes in the color domain are generally studied in separate research programs with the assumption that they are independent. Here, we demonstrate a strong failure of independence between color perception and memory: the effect of context on color appearance is substantially weakened by a short retention interval between a reference and test stimulus. This somewhat counterintuitive result is consistent with Bayesian estimation: as the precision of the representation of the reference surface and its context decays in memory, prior information gains more weight, causing the retained percepts to be drawn toward prior information about surface and context color. This interaction implies that to fully understand information processing in real-world color tasks, perception and memory need to be considered jointly.
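The Bayesian account sketched in this abstract reduces to precision weighting of two Gaussians. The following is a minimal illustration with assumed numbers, not the authors' fitted model: as the precision of the remembered reference decays over the retention interval, the prior gains relative weight and the retained estimate drifts toward the prior mean.

```python
# Minimal sketch of precision-weighted Bayesian estimation; the prior mean,
# the widths, and the observation are illustrative assumptions, not fitted values.

def posterior_mean(prior_mu, prior_sd, obs, obs_sd):
    wp = 1.0 / prior_sd ** 2   # prior precision
    wl = 1.0 / obs_sd ** 2     # precision of the (remembered) observation
    return (wp * prior_mu + wl * obs) / (wp + wl)

# Immediately after viewing: a precise representation stays near the stimulus.
immediate = posterior_mean(0.0, 1.0, 2.0, 0.2)
# After a retention interval: memory noise has grown, so the prior pulls harder.
delayed = posterior_mean(0.0, 1.0, 2.0, 0.8)
print(immediate, delayed)
```

The same arithmetic explains the counterintuitive finding: the delayed estimate sits closer to the prior mean than the immediate one, so context effects measured against a remembered reference are weakened.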
Affiliation(s)
- Maria Olkkonen
- Department of Psychology, Rutgers – The State University of New Jersey, Camden, New Jersey, United States of America
- Sarah R. Allred
- Department of Psychology, Rutgers – The State University of New Jersey, Camden, New Jersey, United States of America

31
McCann JJ, Parraman C, Rizzi A. Reflectance, illumination, and appearance in color constancy. Front Psychol 2014; 5:5. [PMID: 24478738 PMCID: PMC3901009 DOI: 10.3389/fpsyg.2014.00005] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2013] [Accepted: 01/04/2014] [Indexed: 11/30/2022] Open
Abstract
We studied color constancy using a pair of identical 3-D Color Mondrian displays. We viewed one 3-D Mondrian in nearly uniform illumination, and the other in directional, nonuniform illumination. We used the three dimensional structures to modulate the light falling on the painted surfaces. The 3-D structures in the displays were a matching set of wooden blocks. Across Mondrian displays, each corresponding facet had the same paint on its surface. We used only 6 chromatic, and 5 achromatic paints applied to 104 block facets. The 3-D blocks add shadows and multiple reflections not found in flat Mondrians. Both 3-D Mondrians were viewed simultaneously, side-by-side. We used two techniques to measure correlation of appearance with surface reflectance. First, observers made magnitude estimates of changes in the appearances of identical reflectances. Second, an author painted a watercolor of the 3-D Mondrians. The watercolor's reflectances quantified the changes in appearances. While constancy generalizations about illumination and reflectance hold for flat Mondrians, they do not for 3-D Mondrians. A constant paint does not exhibit perfect color constancy, but rather shows significant shifts in lightness, hue and chroma in response to the structure in the nonuniform illumination. Color appearance depends on the spatial information in both the illumination and the reflectances of objects. The spatial information of the quanta catch from the array of retinal receptors generates sensations that have variable correlation with surface reflectance. Models of appearance in humans need to calculate the departures from perfect constancy measured here. This article provides a dataset of measurements of color appearances for computational models of sensation.
Affiliation(s)
- Carinna Parraman, Centre for Fine Print Research, University of the West of England, Bristol, UK
- Alessandro Rizzi, Dipartimento di Informatica, Università degli Studi di Milano, Milano, Italy

32
Lucassen MP, Gevers T, Gijsenij A, Dekker N. Effects of chromatic image statistics on illumination induced color differences. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2013; 30:1871-1884. [PMID: 24323269 DOI: 10.1364/josaa.30.001871] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
We measure the color fidelity of visual scenes that are rendered under different (simulated) illuminants and shown on a calibrated LCD display. Observers make triad illuminant comparisons involving the renderings from two chromatic test illuminants and one achromatic reference illuminant shown simultaneously. Four chromatic test illuminants are used: two along the daylight locus (yellow and blue), and two perpendicular to it (red and green). The observers select the rendering having the best color fidelity, thereby indirectly judging which of the two test illuminants induces the smallest color differences compared to the reference. Both multicolor test scenes and natural scenes are studied. The multicolor scenes are synthesized and represent ellipsoidal distributions in CIELAB chromaticity space having the same mean chromaticity but different chromatic orientations. We show that, for those distributions, color fidelity is best when the vector of the illuminant change (pointing from neutral to chromatic) is parallel to the major axis of the scene's chromatic distribution. For our selection of natural scenes, which generally have much broader chromatic distributions, we measure a higher color fidelity for the yellow and blue illuminants than for red and green. Scrambled versions of the natural images are also studied to exclude possible semantic effects. We quantitatively predict the average observer response (i.e., the illuminant probability) with four types of models, differing in the extent to which they incorporate information processing by the visual system. Results show different levels of performance for the models, and different levels for the multicolor scenes and the natural scenes. Overall, models based on the scene averaged color difference have the best performance. We discuss how color constancy algorithms may be improved by exploiting knowledge of the chromatic distribution of the visual scene.
33
Radonjić A, Gilchrist AL. Depth effect on lightness revisited: The role of articulation, proximity and fields of illumination. Iperception 2013; 4:437-55. [PMID: 24349701 PMCID: PMC3859559 DOI: 10.1068/i0575] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2012] [Revised: 07/29/2013] [Indexed: 11/09/2022] Open
Abstract
The coplanar ratio principle proposes that when the luminance range in an image is larger than the canonical reflectance range of 30:1, the lightness of a target surface depends on the luminance ratio between that target and its adjacent coplanar neighbor (Gilchrist, 1980). This conclusion is based on experiments in which changes in the perceived target depth produced large changes in its perceived lightness without significantly altering the observers' retinal image. Using the same paradigm, we explored how this depth effect on lightness depends on display complexity (articulation), proximity of the target to its highest coplanar luminance and spatial distribution of fields of illumination. Importantly, our experiments allowed us to test differing predictions made by the anchoring theory (Gilchrist et al., 1999), the coplanar ratio principle, as well as other models. We report three main findings, generally consistent with anchoring theory predictions: (1) Articulation can substantially increase the depth effect. (2) Target lightness depends not on the adjacent luminance but on the highest coplanar luminance, irrespective of its position relative to the target. (3) When a plane contains multiple fields of illumination, target lightness depends on the highest luminance in its field of illumination, not on the highest coplanar luminance.
Affiliation(s)
- Ana Radonjić, Department of Psychology, University of Pennsylvania, 3401 Walnut St, Philadelphia, PA 19104, USA
- Alan L Gilchrist, Department of Psychology, Rutgers University, 101 Warren St, Newark, NJ 07102, USA

34
Allred SR, Brainard DH. A Bayesian model of lightness perception that incorporates spatial variation in the illumination. J Vis 2013; 13:18. [PMID: 23814073 PMCID: PMC3697904 DOI: 10.1167/13.7.18] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2012] [Accepted: 03/19/2013] [Indexed: 11/24/2022] Open
Abstract
The lightness of a test stimulus depends in a complex manner on the context in which it is viewed. To predict lightness, it is necessary to leverage measurements of a feasible number of contextual configurations into predictions for a wider range of configurations. Here we pursue this goal, using the idea that lightness results from the visual system's attempt to provide stable information about object surface reflectance. We develop a Bayesian algorithm that estimates both illumination and reflectance from image luminance, and link perceived lightness to the algorithm's estimates of surface reflectance. The algorithm resolves ambiguity in the image through the application of priors that specify what illumination and surface reflectances are likely to occur in viewed scenes. The prior distributions were chosen to allow spatial variation in both illumination and surface reflectance. To evaluate our model, we compared its predictions to a data set of judgments of perceived lightness of test patches embedded in achromatic checkerboards (Allred, Radonjić, Gilchrist, & Brainard, 2012). The checkerboard stimuli incorporated the large variation in luminance that is a pervasive feature of natural scenes. In addition, the luminance profile of the checks both near to and remote from the central test patches was systematically manipulated. The manipulations provided a simplified version of spatial variation in illumination. The model can account for effects of overall changes in image luminance and the dependence of such changes on spatial location as well as some but not all of the more detailed features of the data.
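The core computation the abstract describes — jointly estimating illumination and reflectance from luminance under priors on each — has a closed form in the simplest single-patch case, where log luminance is the sum of log illumination and log reflectance and both priors are Gaussian in log units. A minimal sketch with invented prior parameters (the paper's actual model allows spatial variation in both quantities):

```python
import math

def map_illum_reflect(luminance, mu_i=0.0, sd_i=0.5, mu_r=-1.0, sd_r=0.4):
    """MAP split of a patch's log luminance into log illumination plus
    log reflectance under independent Gaussian priors (toy values).

    Because luminance = illumination * reflectance, choosing log
    illumination fixes log reflectance; the MAP estimate is then a
    precision-weighted compromise between the two priors.
    """
    log_l = math.log(luminance)
    w_i, w_r = 1.0 / sd_i ** 2, 1.0 / sd_r ** 2
    log_i = (w_i * mu_i + w_r * (log_l - mu_r)) / (w_i + w_r)
    return math.exp(log_i), math.exp(log_l - log_i)
```

The two returned estimates always multiply back to the observed luminance; the priors only decide how the ambiguity is split between illuminant and surface.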
Affiliation(s)
- Sarah R. Allred, Department of Psychology, Rutgers, The State University of New Jersey, Camden, NJ, USA
- David H. Brainard, Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA

35
Ma WJ. Organizing probabilistic models of perception. Trends Cogn Sci 2012; 16:511-8. [PMID: 22981359 DOI: 10.1016/j.tics.2012.08.010] [Citation(s) in RCA: 98] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2012] [Revised: 08/22/2012] [Accepted: 08/22/2012] [Indexed: 10/27/2022]
Abstract
Probability has played a central role in models of perception for more than a century, but a look at probabilistic concepts in the literature raises many questions. Is being Bayesian the same as being optimal? Are recent Bayesian models fundamentally different from classic signal detection theory models? Do findings of near-optimal inference provide evidence that neurons compute with probability distributions? This review aims to disentangle these concepts and to classify empirical evidence accordingly.
Affiliation(s)
- Wei Ji Ma, Department of Neuroscience, Baylor College of Medicine, 1 Baylor Plaza, Houston, TX 77030, USA

36
McCollum G, Klam F, Graf W. Face-infringement space: the frame of reference of the ventral intraparietal area. BIOLOGICAL CYBERNETICS 2012; 106:219-239. [PMID: 22653480 DOI: 10.1007/s00422-012-0491-9] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/02/2010] [Accepted: 05/02/2012] [Indexed: 06/01/2023]
Abstract
Experimental studies have shown that responses of ventral intraparietal area (VIP) neurons specialize in head movements and the environment near the head. VIP neurons respond to visual, auditory, and tactile stimuli, smooth pursuit eye movements, and passive and active movements of the head. This study demonstrates mathematical structure on a higher organizational level created within VIP by the integration of a complete set of variables covering face-infringement. Rather than positing dynamics in an a priori defined coordinate system such as those of physical space, we assemble neuronal receptive fields to find out what space of variables VIP neurons together cover. Section 1 presents a view of neurons as multidimensional mathematical objects. Each VIP neuron occupies or is responsive to a region in a sensorimotor phase space, thus unifying variables relevant to the disparate sensory modalities and movements. Convergence on one neuron joins variables functionally, as space and time are joined in relativistic physics to form a unified spacetime. The space of position and motion together forms a neuronal phase space, bridging neurophysiology and the physics of face-infringement. After a brief review of the experimental literature, the neuronal phase space natural to VIP is sequentially characterized, based on experimental data. Responses of neurons indicate variables that may serve as axes of neural reference frames, and neuronal responses have been so used in this study. The space of sensory and movement variables covered by VIP receptive fields joins visual and auditory space to body-bound sensory modalities: somatosensation and the inertial senses. This joining of allocentric and egocentric modalities is in keeping with the known relationship of the parietal lobe to the sense of self in space and to hemineglect, in both humans and monkeys. 
Following this inductive step, variables are formalized in terms of the mathematics of graph theory to deduce which combinations are complete as a multidimensional neural structure that provides the organism with a complete set of options regarding objects impacting the face, such as acceptance, pursuit, and avoidance. We consider four basic variable types: position and motion of the face and of an external object. Formalizing the four types of variables allows us to generalize to any sensory system and to determine the necessary and sufficient conditions for a neural center (for example, a cortical region) to provide a face-infringement space. We demonstrate that VIP includes at least one such face-infringement space.
Affiliation(s)
- Gin McCollum, Fariborz Maseeh Department of Mathematics and Statistics, Portland State University, PO Box 751, Portland, OR 97207-751, USA

37
Xiao B, Hurst B, MacIntyre L, Brainard DH. The color constancy of three-dimensional objects. J Vis 2012; 12:6. [PMID: 22508953 DOI: 10.1167/12.4.6] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Human color constancy has been studied for over 100 years, and there is extensive experimental data for the case where a spatially diffuse light source illuminates a set of flat matte surfaces. In natural viewing, however, three-dimensional objects are viewed in three-dimensional scenes. Little is known about color constancy for three-dimensional objects. We used a forced-choice task to measure the achromatic chromaticity of matte disks, matte spheres, and glossy spheres. In all cases, the test stimuli were viewed in the context of stereoscopically viewed graphics simulations of three-dimensional scenes, and we varied the scene illuminant. We studied conditions both where all cues were consistent with the simulated illuminant change (consistent-cue conditions) and where local contrast was silenced as a cue (reduced-cue conditions). We computed constancy indices from the achromatic chromaticities. To first order, constancy was similar for the three test object types. There was, however, a reliable interaction between test object type and cue condition. In the consistent-cue conditions, constancy tended to be best for the matte disks, while in the reduced-cue conditions constancy was best for the spheres. The presence of this interaction presents an important challenge for theorists who seek to generalize models that account for constancy for flat tests to the more general case of three-dimensional objects.
Affiliation(s)
- Bei Xiao, Graduate Program in Neuroscience, University of Pennsylvania, Philadelphia, PA, USA

38
Wozny DR, Shams L. Computational characterization of visually induced auditory spatial adaptation. Front Integr Neurosci 2011; 5:75. [PMID: 22069383 PMCID: PMC3208186 DOI: 10.3389/fnint.2011.00075] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2011] [Accepted: 10/17/2011] [Indexed: 11/29/2022] Open
Abstract
Recent research investigating the principles governing human perception has provided increasing evidence for probabilistic inference in human perception. For example, human auditory and visual localization judgments closely resemble those of a Bayesian causal inference observer, in which the underlying causal structure of the stimuli is inferred from both the available sensory evidence and prior knowledge. However, most previous studies have focused on characterizing perceptual inference within a static environment, and therefore little is known about how this inference process changes when observers are exposed to a new environment. In this study, we aimed to computationally characterize the change in auditory spatial perception induced by repeated auditory–visual spatial conflict, known as the ventriloquist aftereffect. In theory, this change could reflect a shift in the auditory sensory representations (i.e., a shift in the auditory likelihood distribution), a decrease in the precision of the auditory estimates (i.e., an increase in the spread of the likelihood distribution), a shift in the auditory bias (i.e., a shift in the prior distribution), an increase or decrease in the strength of the auditory bias (i.e., the spread of the prior distribution), or a combination of these. By quantitatively estimating the parameters of the perceptual process for each individual observer using a Bayesian causal inference model, we found that the shift in the perceived locations after exposure was associated with a shift in the mean of the auditory likelihood functions in the direction of the experienced visual offset. The results suggest that repeated exposure to a fixed auditory–visual discrepancy is attributed by the nervous system to sensory representation error and, as a result, the sensory map of space is recalibrated to correct the error.
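The conclusion of this abstract — that a fixed audio-visual discrepancy shifts the mean of the auditory likelihood — is often modeled as an error-correcting recalibration. A toy sketch, not the authors' fitted model; the learning rate and trial count are invented:

```python
def recalibrate(aud_bias, visual_offset_deg, rate=0.05, n_trials=100):
    """Toy recalibration of the auditory map: each exposure to a fixed
    audio-visual offset nudges the auditory bias a small step toward the
    visually indicated position (an error-correcting update)."""
    bias = aud_bias
    for _ in range(n_trials):
        bias += rate * (visual_offset_deg - bias)
    return bias
```

With repeated exposure the bias converges toward the experienced visual offset, which is the direction of shift the study reports for the auditory likelihood mean.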
Affiliation(s)
- David R Wozny, Department of Otolaryngology, Oregon Health and Science University, Portland, OR, USA

39
Girshick AR, Landy MS, Simoncelli EP. Cardinal rules: visual orientation perception reflects knowledge of environmental statistics. Nat Neurosci 2011; 14:926-32. [PMID: 21642976 PMCID: PMC3125404 DOI: 10.1038/nn.2831] [Citation(s) in RCA: 332] [Impact Index Per Article: 25.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2010] [Accepted: 04/05/2011] [Indexed: 12/02/2022]
Abstract
Humans are good at performing visual tasks, but experimental measurements have revealed substantial biases in the perception of basic visual attributes. An appealing hypothesis is that these biases arise through a process of statistical inference, in which information from noisy measurements is fused with a probabilistic model of the environment. However, such inference is optimal only if the observer's internal model matches the environment. We found this to be the case. We measured performance in an orientation-estimation task and found that orientation judgments were more accurate at cardinal (horizontal and vertical) orientations. Judgments made under conditions of uncertainty were strongly biased toward cardinal orientations. We estimated observers' internal models for orientation and found that they matched the local orientation distribution measured in photographs. In addition, we determined how a neural population could embed probabilistic information responsible for such biases.
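The inference this abstract describes can be sketched as a discretized prior-times-likelihood computation: a prior peaked at cardinal orientations pulls a noisy orientation measurement toward the nearest cardinal. The prior shape and noise level below are invented for illustration, and orientation is treated on a linear 0–180 degree grid rather than circularly:

```python
import numpy as np

def bayes_estimate(measurement_deg, sensory_sd_deg=12.0):
    """Posterior-mean orientation estimate on a 0-180 degree grid."""
    theta = np.linspace(0.0, 180.0, 3601)
    m = theta % 90.0
    # Hypothetical prior: bumps at the cardinals (0, 90, 180 deg) over a
    # uniform floor, a stand-in for measured natural-scene statistics.
    prior = 0.2 + np.exp(-0.5 * (m / 8.0) ** 2) + np.exp(-0.5 * ((m - 90.0) / 8.0) ** 2)
    # Gaussian likelihood centered on the noisy measurement.
    likelihood = np.exp(-0.5 * ((theta - measurement_deg) / sensory_sd_deg) ** 2)
    posterior = prior * likelihood
    posterior /= posterior.sum()
    return float((theta * posterior).sum())
```

An oblique measurement (e.g., 70 degrees) yields an estimate biased toward vertical, while a measurement exactly at a cardinal is left unbiased — the qualitative pattern of biases the study reports.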
Affiliation(s)
- Ahna R Girshick, Department of Psychology, New York University, New York, New York, USA

40
Brainard DH, Maloney LT. Surface color perception and equivalent illumination models. J Vis 2011; 11:11.5.1. [PMID: 21536727 DOI: 10.1167/11.5.1] [Citation(s) in RCA: 42] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Vision provides information about the properties and identity of objects. The ease with which we perceive object properties belies the difficulty of the underlying information-processing task. In the case of object color, retinal information about object reflectance is confounded with information about the illumination as well as about the object's shape and pose. There is no obvious rule that allows transformation of the retinal image to a color representation that depends primarily on object surface reflectance. Under many circumstances, however, object color appearance is remarkably stable across scenes in which the object is viewed. Here, we review a line of experiments and theory that aim to understand how the visual system stabilizes object color appearance. Our emphasis is on models derived from explicit analysis of the computational problem of estimating the physical properties of illuminants and surfaces from the retinal image, and experiments that test these models. We argue that this approach has considerable promise for allowing generalization from simplified laboratory experiments to richer scenes that more closely approximate natural viewing. We discuss the relation between the work we review and other theoretical approaches available in the literature.
Affiliation(s)
- David H Brainard, Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA

41
Geisler WS. Contributions of ideal observer theory to vision research. Vision Res 2011; 51:771-81. [PMID: 20920517 PMCID: PMC3062724 DOI: 10.1016/j.visres.2010.09.027] [Citation(s) in RCA: 140] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2010] [Revised: 09/10/2010] [Accepted: 09/23/2010] [Indexed: 10/19/2022]
Abstract
An ideal observer is a hypothetical device that performs optimally in a perceptual task given the available information. The theory of ideal observers has proven to be a powerful and useful tool in vision research, which has been applied to a wide range of problems. Here I first summarize the basic concepts and logic of ideal observer analysis and then briefly describe applications in a number of different areas, including pattern detection, discrimination and estimation, perceptual grouping, shape, depth and motion perception and visual attention, with an emphasis on recent applications. Given recent advances in mathematical statistics, in computational power, and in techniques for measuring behavioral performance, neural activity and natural scene statistics, it seems certain that ideal observer theory will play an ever increasing role in basic and applied areas of vision science.
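For the simplest case this review covers — discriminating two known patterns in independent Gaussian noise — the ideal observer's performance has a standard closed form: sensitivity d′ equals the Euclidean distance between the patterns divided by the noise standard deviation. A minimal sketch:

```python
import math

def ideal_dprime(pattern_a, pattern_b, noise_sd):
    """Ideal-observer sensitivity for discriminating two known patterns
    in i.i.d. Gaussian noise: the Euclidean distance between the
    patterns, divided by the noise standard deviation."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(pattern_a, pattern_b)))
    return dist / noise_sd
```

Comparing a real observer's d′ to this ideal value gives the efficiency measures used throughout ideal observer analysis.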
Affiliation(s)
- Wilson S Geisler, Center for Perceptual Systems and Department of Psychology, University of Texas at Austin, United States

42
Abstract
The visual system is tasked with extracting stimulus content (e.g. the identity of an object) from the spatiotemporal light pattern falling on the retina. However, visual information can be ambiguous with regard to content (e.g. an object when viewed from far away), requiring the system to also consider contextual information. Additionally, visual information originating from the same content can differ (e.g. the same object viewed from different angles), requiring the system to extract content invariant to these differences. In this review, we explore these challenges from experimental and theoretical perspectives, and motivate the need to incorporate solutions for both ambiguity and invariance into hierarchical models of visual processing.
43
Abstract
A quarter of a century ago, the first systematic behavioral experiments were performed to clarify the nature of color constancy-the effect whereby the perceived color of a surface remains constant despite changes in the spectrum of the illumination. At about the same time, new models of color constancy appeared, along with physiological data on cortical mechanisms and photographic colorimetric measurements of natural scenes. Since then, as this review shows, there have been many advances. The theoretical requirements for constancy have been better delineated and the range of experimental techniques has been greatly expanded; novel invariant properties of images and a variety of neural mechanisms have been identified; and increasing recognition has been given to the relevance of natural surfaces and scenes as laboratory stimuli. Even so, there remain many theoretical and experimental challenges, not least to develop an account of color constancy that goes beyond deterministic and relatively simple laboratory stimuli and instead deals with the intrinsically variable nature of surfaces and illuminations present in the natural world.
Affiliation(s)
- David H Foster, Department of Electrical and Electronic Engineering, University of Manchester, Sackville Street, Manchester M13 9PL, England, UK

44
Huynh CP, Robles-Kelly A. A Solution of the Dichromatic Model for Multispectral Photometric Invariance. Int J Comput Vis 2010. [DOI: 10.1007/s11263-010-0333-y] [Citation(s) in RCA: 41] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
45
Abstract
The function of the retina is crucial, for it must encode visual signals so the brain can detect objects in the visual world. However, the biological mechanisms of the retina add noise to the visual signal and therefore reduce its quality and capacity to inform about the world. Because an organism's survival depends on its ability to unambiguously detect visual stimuli in the presence of noise, its retinal circuits must have evolved to maximize signal quality, suggesting that each retinal circuit has a specific functional role. Here we explain how an ideal observer can measure signal quality to determine the functional roles of retinal circuits. In a visual discrimination task, the ideal observer can measure from a neural response the increment threshold, the number of distinguishable response levels, and the neural code, which are fundamental measures of signal quality relevant to behavior. It can compare the signal quality in stimulus and response to determine the optimal stimulus, and can measure the specific loss of signal quality by a neuron's receptive field for non-optimal stimuli. Taking into account noise correlations, the ideal observer can track the signal-to-noise ratio available from one stage to the next, allowing one to determine each stage's role in preserving signal quality. A comparison between the ideal performance computed from the photon flux absorbed from the stimulus and the actual performance of a retinal ganglion cell shows that in daylight a ganglion cell and its presynaptic circuit lose approximately a factor of 10 in contrast sensitivity, suggesting specific signal-processing roles for synaptic connections and other neural circuit elements. The ideal observer is a powerful tool for characterizing signal processing in single neurons and arrays along a neural pathway.
Affiliation(s)
- Robert G Smith, Department of Neuroscience, University of Pennsylvania, Philadelphia, PA 19104-6058, USA

46
Brainard DH, Williams DR, Hofer H. Trichromatic reconstruction from the interleaved cone mosaic: Bayesian model and the color appearance of small spots. J Vis 2008; 8:15.1-23. [PMID: 18842086 PMCID: PMC2671890 DOI: 10.1167/8.5.15] [Citation(s) in RCA: 42] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2007] [Accepted: 01/06/2008] [Indexed: 11/24/2022] Open
Abstract
Observers use a wide range of color names, including white, to describe monochromatic flashes with a retinal size comparable to that of a single cone. We model such data as a consequence of information loss arising from trichromatic sampling. The model starts with the simulated responses of the individual L, M, and S cones actually present in the cone mosaic and uses these to reconstruct the L-, M-, and S-cone signals that were present at every image location. We incorporate the optics and the mosaic topography of individual observers, as well as the spatio-chromatic statistics of natural images. We simulated the experiment of H. Hofer, B. Singer, & D. R. Williams (2005) and predicted the color name on each simulated trial from the average chromaticity of the spot reconstructed by our model. Broad features of the data across observers emerged naturally as a consequence of the measured individual variation in the relative numbers of L, M, and S cones. The model's output is also consistent with the appearance of larger spots and of sinusoidal contrast modulations. Finally, the model makes testable predictions for future experiments that study how color naming varies with the fine structure of the retinal mosaic.
Affiliation(s)
- David H Brainard, Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA

47
Pitkow X, Sompolinsky H, Meister M. A neural computation for visual acuity in the presence of eye movements. PLoS Biol 2008; 5:e331. [PMID: 18162043 PMCID: PMC2222970 DOI: 10.1371/journal.pbio.0050331] [Citation(s) in RCA: 38] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2007] [Accepted: 11/09/2007] [Indexed: 11/19/2022] Open
Abstract
Humans can distinguish visual stimuli that differ by features the size of only a few photoreceptors. This is possible despite the incessant image motion due to fixational eye movements, which can be many times larger than the features to be distinguished. To perform well, the brain must identify the retinal firing patterns induced by the stimulus while discounting similar patterns caused by spontaneous retinal activity. This is a challenge since the trajectory of the eye movements, and consequently, the stimulus position, are unknown. We derive a decision rule for using retinal spike trains to discriminate between two stimuli, given that their retinal image moves with an unknown random walk trajectory. This algorithm dynamically estimates the probability of the stimulus at different retinal locations, and uses this to modulate the influence of retinal spikes acquired later. Applied to a simple orientation-discrimination task, the algorithm performance is consistent with human acuity, whereas naive strategies that neglect eye movements perform much worse. We then show how a simple, biologically plausible neural network could implement this algorithm using a local, activity-dependent gain and lateral interactions approximately matched to the statistics of eye movements. Finally, we discuss evidence that such a network could be operating in the primary visual cortex. Like a camera, the eye projects an image of the world onto our retina. But unlike a camera, the eye continues to execute small, random movements, even when we fix our gaze. Consequently, the projected image jitters over the retina. In a camera, such jitter leads to a blurred image on the film. Interestingly, our visual acuity is many times sharper than expected from the motion blur. Apparently, the brain uses an active process to track the image through its jittering motion across the retina. Here, we propose an algorithm for how this can be accomplished. 
The algorithm uses realistic spike responses of optic nerve fibers to reconstruct the visual image, and requires no knowledge of the eye movement trajectory. Its performance can account for human visual acuity. Furthermore, we show that this algorithm could be implemented biologically by the neural circuits of primary visual cortex. Even when we hold our gaze still, small eye movements jitter the visual image of the world across the retina. The authors show how a stable and sharp image might be recovered through neural processing in the visual cortex.
Collapse
Affiliation(s)
- Xaq Pitkow
- Program in Biophysics, Harvard University, Cambridge, Massachusetts, United States of America
| | - Haim Sompolinsky
- Racah Institute of Physics and Center for Neural Computation, Hebrew University, Jerusalem, Israel
- Center for Brain Science, Harvard University, Cambridge, Massachusetts, United States of America
| | - Markus Meister
- Center for Brain Science, Harvard University, Cambridge, Massachusetts, United States of America
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, Massachusetts, United States of America
| |
Collapse
|
48
|
Affiliation(s)
- Steven K. Shevell
- Departments of Psychology and Ophthalmology & Visual Science, University of Chicago, Chicago, Illinois 60637
| | | |
Collapse
|
49
|
Holly JE, McCollum G. Constructive perception of self-motion. J Vestib Res 2008; 18:249-66. [PMID: 19542599 PMCID: PMC3781936] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
This review focusses attention on a ragged edge of our knowledge of self-motion perception, where understanding ends but there are experimental results to indicate that present approaches to analysis are inadequate. Although self-motion perception displays processes of "top-down" construction, it is typically analyzed as if it is nothing more than a deformation of the stimulus, using a "bottom-up" and input/output approach beginning with the transduction of the stimulus. Analysis often focusses on the extent to which passive transduction of the movement stimulus is accurate. Some perceptual processes that deform or transform the stimulus arise from the way known properties of sensory receptors contribute to perceptual accuracy or inaccuracy. However, further constructive processes in self-motion perception that involve discrete transformations are not well understood. We introduce constructive perception with a linguistic example which displays familiar discrete properties, then look closely at self-motion perception. Examples of self-motion perception begin with cases in which constructive processes transform particular properties of the stimulus. These transformations allow the nervous system to compose whole percepts of movement; that is, self-motion perception acts at a whole-movement level of analysis, rather than passively transducing individual cues. These whole-movement percepts may be paradoxical. In addition, a single stimulus may give rise to multiple perceptions. After reviewing self-motion perception studies, we discuss research methods for delineating principles of the constructed perception of self-motion. The habit of viewing self-motion illusions only as continuous deformations of the stimulus may be blinding the field to other perceptual phenomena, including those best characterized using the mathematics of discrete transformations or mathematical relationships relating sensory modalities in novel, sometimes discrete ways. 
Analysis of experiments such as these is required to mathematically formalize elements of self-motion perception, the transformations they may undergo, consistency principles, and logical structure underlying multiplicity of perceptions. Such analysis will lead to perceptual rules analogous to those recognized in visual perception.
Collapse
Affiliation(s)
- Jan E. Holly
- Department of Mathematics, Colby College, 5845 Mayflower Hill, Waterville, Maine 04901
| | - Gin McCollum
- Neuro-Otology Department, Legacy Research Center, 1225 NE 2nd Avenue, Portland, Oregon 97232
| |
Collapse
|
50
|
Anderson B. Neglect as a disorder of prior probability. Neuropsychologia 2008; 46:1566-9. [DOI: 10.1016/j.neuropsychologia.2007.12.006] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2007] [Accepted: 12/04/2007] [Indexed: 11/27/2022]
|