1
Morimoto T, Akbarinia A, Storrs K, Cheeseman JR, Smithson HE, Gegenfurtner KR, Fleming RW. Color and gloss constancy under diverse lighting environments. J Vis 2023;23:8. PMID: 37432844; PMCID: PMC10351023; DOI: 10.1167/jov.23.7.8.
Abstract
When we look at an object, we simultaneously see how glossy or matte it is, how light or dark, and what color. Yet, at each point on the object's surface, both diffuse and specular reflections are mixed in different proportions, resulting in substantial spatial chromatic and luminance variations. To further complicate matters, this pattern changes radically when the object is viewed under different lighting conditions. The purpose of this study was to simultaneously measure our ability to judge color and gloss using an image set capturing diverse object and illuminant properties. Participants adjusted the hue, lightness, chroma, and specular reflectance of a reference object so that it appeared to be made of the same material as a test object. Critically, the two objects were presented under different lighting environments. We found that hue matches were highly accurate, except under a chromatically atypical illuminant. Chroma and lightness constancy were generally poor, but these failures correlated well with simple image statistics. Gloss constancy was particularly poor, and these failures were only partially explained by reflection contrast. Importantly, across all measures, participants were highly consistent with one another in their deviations from constancy. Although color and gloss constancy hold well in simple conditions, the variety of lighting and shape in the real world presents significant challenges to our visual system's ability to judge intrinsic material properties.
Affiliation(s)
- Takuma Morimoto
- Justus Liebig University Giessen, Giessen, Germany
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Katherine Storrs
- Justus Liebig University Giessen, Giessen, Germany
- School of Psychology, University of Auckland, New Zealand
- Jacob R Cheeseman
- Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), Universities of Marburg, Giessen and Darmstadt, Germany
- Hannah E Smithson
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Roland W Fleming
- Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), Universities of Marburg, Giessen and Darmstadt, Germany
2
Hansmann-Roth S, Chetverikov A, Kristjánsson Á. Extracting statistical information about shapes in the visual environment. Vision Res 2023;206:108190. PMID: 36780808; DOI: 10.1016/j.visres.2023.108190.
Abstract
It is well known that observers can use so-called summary statistics of visual ensembles to simplify perceptual processing. The assumption has been that instead of representing feature distributions in detail, the visual system extracts the mean and variance of visual ensembles. But recent evidence from implicit testing using a method called feature distribution learning showed that far more detail of the distributions is retained than the summary-statistic literature indicates. Observers also encode higher-order statistics, such as the kurtosis of feature distributions of orientation and color. But this sort of learning has not been shown for more intricate aspects of visual information. Here we tested the learning of distractor ensembles for shape, using the feature distribution learning method. Using a linearized circular shape space, we found that learning of detailed distributions does not occur for this shape space, although observers were able to learn the mean and range of the distributions. Previous demonstrations of feature distribution learning involved simpler feature dimensions than the more complex shape space tested here, and our findings may therefore reveal important boundary conditions of feature distribution learning.
Affiliation(s)
- Sabrina Hansmann-Roth
- Icelandic Vision Lab, School of Health Sciences, University of Iceland, Reykjavík, Iceland.
- Andrey Chetverikov
- Donders Institute for Brain, Cognition, and Behavior, Radboud University, Nijmegen, the Netherlands
- Árni Kristjánsson
- Icelandic Vision Lab, School of Health Sciences, University of Iceland, Reykjavík, Iceland
3
Li MS, Abbatecola C, Petro LS, Muckli L. Numerosity Perception in Peripheral Vision. Front Hum Neurosci 2021;15:750417. PMID: 34803635; PMCID: PMC8597708; DOI: 10.3389/fnhum.2021.750417.
Abstract
Peripheral vision has different functional priorities for mammals than foveal vision. One of its roles is to monitor the environment while central vision is focused on the current task. From this perspective, becoming distracted too easily would be counterproductive, so the brain should react only to behaviourally relevant changes. Gist processing is well suited to this purpose, and it is therefore not surprising that evidence from both functional brain imaging and behavioural research suggests a tendency to generalize and blend information in the periphery. This may be caused by the balance of perceptual influence in the periphery between bottom-up (i.e., sensory information) and top-down (i.e., prior or contextual information) processing channels. Here, we investigated this interaction behaviourally using a peripheral numerosity discrimination task with top-down and bottom-up manipulations. Participants compared numerosity between the left and right peripheries of a screen. Each periphery was divided into a centre and a surrounding area, only one of which was the task-relevant target region. Our top-down manipulation was the instruction indicating which area to attend: centre or surround. We varied signal strength by altering stimulus durations, i.e., the amount of information presented and processed (a combined bottom-up and recurrent top-down feedback factor). We found that numerosity perceived in target regions was affected by contextual information in neighbouring (but irrelevant) areas. This effect appeared as soon as stimulus duration allowed the task to be reliably performed and persisted even at the longest duration (1 s). We compared the pattern of results with an ideal-observer model and found a qualitative difference in the way centre and surround areas interacted perceptually in the periphery.
When participants reported on the central area, the irrelevant surround affected the response as a weighted combination, consistent with the idea of a receptive field focused on the target area into which irrelevant surround stimulation leaks. When participants reported on the surround, the responses were best described by a model in which attention occasionally switches from the task-relevant surround to the task-irrelevant centre, consistent with a selection model of two competing streams of information. Overall, our results show that the influence of spatial context in the periphery is mandatory but task dependent.
Affiliation(s)
- Min Susan Li
- Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, University of Glasgow, Glasgow, United Kingdom
- Clement Abbatecola
- Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, University of Glasgow, Glasgow, United Kingdom
- Lucy S Petro
- Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, University of Glasgow, Glasgow, United Kingdom
- Lars Muckli
- Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, University of Glasgow, Glasgow, United Kingdom
4
Sun HC, St-Amand D, Baker CL, Kingdom FAA. Visual perception of texture regularity: Conjoint measurements and a wavelet response-distribution model. PLoS Comput Biol 2021;17:e1008802. PMID: 34653176; PMCID: PMC8550603; DOI: 10.1371/journal.pcbi.1008802.
Abstract
Texture regularity, such as the repeating pattern in a carpet, brickwork or tree bark, is a ubiquitous feature of the visual world. The perception of regularity has generally been studied using multi-element textures in which the degree of regularity has been manipulated by adding random jitter to the elements’ positions. Here we used three-factor Maximum Likelihood Conjoint Measurement (MLCM) for the first time to investigate the encoding of regularity information under more complex conditions in which element spacing and size, in addition to positional jitter, were manipulated. Human observers were presented with large numbers of pairs of multi-element stimuli with varying levels of the three factors, and indicated on each trial which stimulus appeared more regular. All three factors contributed to regularity perception. Jitter, as expected, strongly affected regularity perception. This effect of jitter on regularity perception is strongest at small element spacing and large texture element size, suggesting that the visual system utilizes the edge-to-edge distance between elements as the basis for regularity judgments. We then examined how the responses of a bank of Gabor wavelet spatial filters might account for our results. Our analysis indicates that the peakedness of the spatial frequency (SF) distribution, a previously favored proposal, is insufficient for regularity encoding since it varied more with element spacing and size than with jitter. Instead, our results support the idea that the visual system may extract texture regularity information from the moments of the SF-distribution across orientation. In our best-performing model, the variance of SF-distribution skew across orientations can explain 70% of the variance of estimated texture regularity from our data, suggesting that it could provide a candidate read-out for perceived regularity. 
We investigated human perception of texture regularity, in which subjects made comparative judgements of regularity in pairs of texture stimuli with differing levels of three parameters of texture construction—spacing and size of texture elements, and their positional jitter. We analyzed the data with a novel three-factor Maximum Likelihood Conjoint Measurement (MLCM) approach, which allowed us to evaluate the effect size and significance of the three factors as well as their interactions. We found that all three factors contributed to perceived regularity, with significant main effects and interactions between factors, in a manner suggesting that edge-to-edge distances between elements might contribute importantly to regularity judgments. Using a bank of Gabor wavelet spatial filters to model the response of the human visual system to our textures, we compared four ways in which the distribution of wavelet responses could account for our measured data on perceived regularity. Our results suggest that the orientation as well as spatial frequency (SF) information from the wavelet filters contributes importantly—in particular, the variance across orientations of the skew of the SF distribution provides a candidate basis for perceived texture regularity.
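As a rough illustration of this kind of read-out, the sketch below computes, for a grayscale texture, the skew of a coarse spatial-frequency response distribution at each orientation and then its variance across orientations. It uses a single Gabor template per band rather than the paper's full wavelet filter bank, so the band choices and normalisation are assumptions:

```python
import numpy as np

def gabor(size, sf, theta, sigma=None):
    """Even-symmetric Gabor template; sf in cycles/image, theta in radians."""
    sigma = sigma or size / 6
    y, x = np.mgrid[-(size // 2):size - size // 2, -(size // 2):size - size // 2]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * sf * xr / size)

def sf_skew_variance(img, sfs=(4, 8, 16), n_theta=6):
    """Candidate read-out: per orientation, take the skew of the response
    distribution over spatial frequency; return its variance across
    orientations."""
    sfs = np.asarray(sfs, float)
    skews = []
    for theta in np.linspace(0, np.pi, n_theta, endpoint=False):
        # one template response per SF band at this orientation
        resp = np.array([abs((img * gabor(img.shape[0], sf, theta)).sum())
                         for sf in sfs])
        p = resp / (resp.sum() + 1e-12)          # normalised SF distribution
        mu = (p * sfs).sum()
        var = (p * (sfs - mu) ** 2).sum()
        skews.append((p * (sfs - mu) ** 3).sum() / (var ** 1.5 + 1e-12))
    return float(np.var(skews))
```

With a full filter bank and proper energy normalisation the same moments could be computed per texture image; this sketch only illustrates the shape of the statistic.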
Affiliation(s)
- Hua-Chun Sun
- McGill Vision Research, Department of Ophthalmology, McGill University, Montreal, Canada
- School of Psychology, UNSW Sydney, Australia
- David St-Amand
- McGill Vision Research, Department of Ophthalmology, McGill University, Montreal, Canada
- Curtis L. Baker
- McGill Vision Research, Department of Ophthalmology, McGill University, Montreal, Canada
5
Harvey JS, Smithson HE. Low level visual features support robust material perception in the judgement of metallicity. Sci Rep 2021;11:16396. PMID: 34385496; PMCID: PMC8361131; DOI: 10.1038/s41598-021-95416-6.
Abstract
The human visual system is able to rapidly and accurately infer the material properties of objects and surfaces in the world. Yet an inverse optics approach—estimating the bi-directional reflectance distribution function of a surface, given its geometry and environment, and relating this to the optical properties of materials—is both intractable and computationally unaffordable. Rather, previous studies have found that the visual system may exploit low-level spatio-chromatic statistics as heuristics for material judgment. Here, we present results from psychophysics and modeling that support the use of image statistics heuristics in the judgement of metallicity—the quality of appearance that suggests an object is made from metal. Using computer graphics, we generated stimuli that varied along two physical dimensions: the smoothness of a metal object, and the evenness of its transparent coating. This allowed for the exploration of low-level image statistics, whilst ensuring that each stimulus was a naturalistic, physically plausible image. A conjoint-measurement task decoupled the contributions of these dimensions to the perception of metallicity. Low-level image features, as represented in the activations of oriented linear filters at different spatial scales, were found to correlate with the dimensions of the stimulus space, and decision-making models using these activations replicated observer performance in perceiving differences in metal smoothness and coating bumpiness, and judging metallicity. Importantly, the performance of these models did not deteriorate when objects were rotated within their simulated scene, with corresponding changes in image properties. We therefore conclude that low-level image features may provide reliable cues for the robust perception of metallicity.
Affiliation(s)
- Joshua S Harvey
- Neuroscience Institute, NYU Langone Health, New York, NY, 10016, USA
- Department of Engineering Science, Oxford University, Oxford, OX1 3PJ, UK
- Department of Experimental Psychology, Oxford University, Oxford, OX2 6GG, UK
- Hannah E Smithson
- Department of Experimental Psychology, Oxford University, Oxford, OX2 6GG, UK
6
Abstract
In studying visual perception, we seek to develop models of processing that accurately predict perceptual judgments. Much of this work is focused on judgments of discrimination, and there is a large literature concerning models of visual discrimination. There are, however, non-threshold visual judgments, such as judgments of the magnitude of differences between visual stimuli, that provide a means to bridge the gap between threshold and appearance. We describe two such models of suprathreshold judgments, maximum likelihood difference scaling and maximum likelihood conjoint measurement, and review recent literature that has exploited them.
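A minimal simulation of the maximum likelihood difference scaling (MLDS) decision model, assuming Gaussian decision noise on a hypothetical compressive internal scale (the scale values, noise level, and quadruple below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def mlds_trial(scale, i, j, k, l, sigma=0.1):
    """One MLDS trial: which pair, (i, j) or (k, l), spans the larger
    perceptual difference? The response follows Gaussian decision noise on
    the internal scale values; fitting inverts this by maximum likelihood."""
    d = abs(scale[i] - scale[j]) - abs(scale[k] - scale[l])
    return int(d + rng.normal(0, sigma) > 0)   # 1 -> first pair judged larger

# Hypothetical compressive internal scale over five stimulus levels
scale = np.sqrt(np.linspace(0, 1, 5))

# Under this scale the step (0, 1) looks larger than the step (3, 4),
# so responses should mostly be 1
responses = [mlds_trial(scale, 0, 1, 3, 4) for _ in range(200)]
```

Given many such quadruple judgments, the scale values themselves become the free parameters estimated from the response probabilities.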
Affiliation(s)
- Laurence T Maloney
- Department of Psychology, New York University, New York, New York 10003, USA
- Kenneth Knoblauch
- Université Lyon, Université Claude Bernard Lyon 1, INSERM, Stem Cell and Brain Research Institute U1208, 69500 Bron, France
- National Centre for Optics, Vision and Eye Care, Faculty of Health and Social Sciences, University of South-Eastern Norway, 3616 Kongsberg, Norway
7
Abstract
A central question in psychophysical research is how perceptual differences between stimuli translate into physical differences and vice versa. Characterizing such a psychophysical scale would reveal how a stimulus is converted into a perceptual event, particularly under changes in viewing conditions (e.g., illumination). Various methods exist to derive perceptual scales, but in practice, scale estimation is often bypassed by assessing appearance matches. Matches, however, only reflect the underlying perceptual scales but do not reveal them directly. Two recently developed methods, MLDS (Maximum Likelihood Difference Scaling) and MLCM (Maximum Likelihood Conjoint Measurement), promise to reliably estimate perceptual scales. Here we compared both methods in their ability to estimate perceptual scales across context changes in the domain of lightness perception. In simulations, we adopted a lightness constant, a contrast, and a luminance-based observer model to generate differential patterns of perceptual scales. MLCM correctly recovered all models. MLDS correctly recovered only the lightness constant observer model. We also empirically probed both methods with two types of stimuli: (a) variegated checkerboards that support lightness constancy and (b) center-surround stimuli that do not support lightness constancy. Consistent with the simulations, MLDS and MLCM provided similar scale estimates in the first case and divergent estimates in the second. In addition, scales from MLCM, and not from MLDS, accurately predicted asymmetric matches for both types of stimuli. Taking experimental and simulation results together, MLCM seems more apt to provide a valid estimate of the perceptual scales underlying judgments of lightness across viewing conditions.
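The additive observer model that MLCM fits can be sketched as follows; the internal scale values, context levels, and noise level are illustrative assumptions rather than estimates from the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical internal scales: psi_r for reflectance, psi_c for context
psi_r = np.linspace(0.0, 1.0, 5)      # five reflectance levels
psi_c = np.array([0.0, 0.1, 0.3])     # three context (illumination) levels

def mlcm_trial(a, b, sigma=0.1):
    """Conjoint-measurement trial: stimuli a=(i, j) and b=(k, l) differ on
    both factors; the observer reports which appears lighter. Under the
    additive model the judged value is psi_r[i] + psi_c[j] plus noise."""
    (i, j), (k, l) = a, b
    d = (psi_r[i] + psi_c[j]) - (psi_r[k] + psi_c[l])
    return int(d + rng.normal(0, sigma) > 0)

# A lightness-constant observer would instead have psi_c == 0 everywhere;
# estimating both scales from many such trials is what MLCM does.
responses = [mlcm_trial((4, 0), (0, 2)) for _ in range(100)]
```

The divergence between MLDS and MLCM in the non-constancy case corresponds to whether the context term psi_c is allowed to enter the decision variable.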
8
Wendt G, Faul F. Factors Influencing the Detection of Spatially-Varying Surface Gloss. Iperception 2019;10:2041669519866843. PMID: 31523415; PMCID: PMC6732868; DOI: 10.1177/2041669519866843.
Abstract
In this study, we investigate the ability of human observers to detect spatial inhomogeneities in the glossiness of a surface and how performance in this task depends on several context factors. We used computer-generated stimuli showing a single object in three-dimensional space whose surface was split into two spatial areas with different microscale smoothness. The context factors were the kind of illumination, the object's shape, the availability of motion information, the degree of edge blurring, the spatial proportions between the two areas of different smoothness, and the general smoothness level. Detection thresholds were determined using a two-alternative forced-choice (2AFC) task implemented in a double random staircase procedure, in which subjects had to indicate for each stimulus whether the surface appeared to have a spatially uniform material. We found evidence that two different cues are used for this task: luminance differences and differences in highlight properties between areas of different microscale smoothness. While the visual system seems to be highly sensitive in detecting gloss differences based on luminance contrast information, detection thresholds were considerably higher when the judgment was mainly based on differences in highlight features, such as their size, intensity, and sharpness.
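The adaptive procedure can be illustrated with a minimal pair of interleaved staircases; the simulated observer, the 1-up/1-down rule, step size, and trial count below are hypothetical, not the study's parameters:

```python
import math
import random

random.seed(0)

def observer_detects(delta, threshold=0.3):
    """Hypothetical observer: probability of detecting a smoothness
    difference delta rises logistically around the threshold."""
    p = 1 / (1 + math.exp(-(delta - threshold) / 0.05))
    return random.random() < p

def run_staircase(start, step=0.05, n_trials=60):
    """1-up/1-down rule on delta: decrease it after a detection, increase
    it after a miss; late trials hover near the detection threshold."""
    delta, track = start, []
    for _ in range(n_trials):
        delta = max(0.0, delta - step if observer_detects(delta) else delta + step)
        track.append(delta)
    return sum(track[-20:]) / 20

# 'Double random' aspect: interleave one staircase starting above and one
# starting below the expected threshold, then combine the estimates
est = (run_staircase(0.6) + run_staircase(0.05)) / 2
```

Interleaving staircases from both directions guards against hysteresis and response bias in the threshold estimate.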
Affiliation(s)
- Gunnar Wendt
- Christian-Albrechts-Universität zu Kiel, Institut für Psychologie, Kiel, Germany
- Franz Faul
- Christian-Albrechts-Universität zu Kiel, Institut für Psychologie, Kiel, Germany
9
Chadwick AC, Cox G, Smithson HE, Kentridge RW. Beyond scattering and absorption: Perceptual unmixing of translucent liquids. J Vis 2019;18:18. PMID: 30372728; PMCID: PMC6205562; DOI: 10.1167/18.11.18.
Abstract
Is perception of translucence based on estimations of scattering and absorption of light or on statistical pseudocues associated with familiar materials? We compared perceptual performance with real and computer-generated stimuli. Real stimuli were glasses of milky tea. Milk predominantly scatters light and tea absorbs it, but since the tea absorbs less as the milk concentration increases, the effects of milkiness and strength on scattering and absorption are not independent. Conversely, computer-generated stimuli were glasses of “milky tea” in which absorption and scattering were independently manipulated. Observers judged tea concentrations regardless of milk concentrations, or vice versa. Maximum-likelihood conjoint measurement was used to estimate the contributions of each physical component—concentrations of milk and tea, or amounts of scattering and absorption—to perceived milkiness or tea strength. Separability of the two physical dimensions was better for real than for computer-generated teas, suggesting that interactions between scattering and absorption were correctly accounted for in perceptual unmixing, but unmixing was always imperfect. Since the real and rendered stimuli represent different physical processes and therefore differ in their image statistics, perceptual judgments with these stimuli allowed us to identify particular pseudocues (presumably learned with real stimuli) that explain judgments with both stimulus sets.
Affiliation(s)
- Alice C Chadwick
- Department of Psychology, University of Durham, Durham University Science Site, Durham, UK
- George Cox
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Hannah E Smithson
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Robert W Kentridge
- Department of Psychology, University of Durham, Durham University Science Site, Durham, UK
10
Radonjić A, Cottaris NP, Brainard DH. The relative contribution of color and material in object selection. PLoS Comput Biol 2019;15:e1006950. PMID: 30978187; PMCID: PMC6490924; DOI: 10.1371/journal.pcbi.1006950.
Abstract
Object perception is inherently multidimensional: information about color, material, texture and shape all guide how we interact with objects. We developed a paradigm that quantifies how two object properties (color and material) combine in object selection. On each experimental trial, observers viewed three blob-shaped objects—the target and two tests—and selected the test that was more similar to the target. Across trials, the target object was fixed, while the tests varied in color (across 7 levels) and material (also 7 levels, yielding 49 possible stimuli). We used an adaptive trial selection procedure (Quest+) to present, on each trial, the stimulus test pair that is most informative of underlying processes that drive selection. We present a novel computational model that allows us to describe observers’ selection data in terms of (1) the underlying perceptual stimulus representation and (2) a color-material weight, which quantifies the relative importance of color vs. material in selection. We document large individual differences in the color-material weight across the 12 observers we tested. Furthermore, our analyses reveal limits on how precisely selection data simultaneously constrain perceptual representations and the color-material weight. These limits should guide future efforts towards understanding the multidimensional nature of object perception. Much is known about how the visual system extracts information about individual object properties, such as color or material. Considerably less is known about how percepts of these properties interact to form a multidimensional object representation. We report the first quantitative analysis of how perceived color and material combine in object selection, using a task designed to reflect key aspects of how we use vision in real life. 
We introduce a computational model that describes observers’ selection behavior in terms of (1) how objects are represented in an underlying subjective perceptual color-material space and (2) how differences in perceived object color and material combine to guide selection. We find large individual differences in the degree to which observers select objects based on color relative to material: some base their selections almost entirely on color, some weight color and material nearly equally, and others rely almost entirely on material. A fine-grained analysis clarifies the limits on how precisely selection data may be leveraged to simultaneously understand the underlying perceptual representations on one hand and how the information about perceived color and material combine on the other. Our work provides a foundation for improving our understanding of visual computations in natural viewing.
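The color-material weight can be sketched as a noisy comparison of weighted distances in a subjective (colour, material) space; the grid coordinates, weight, and noise term below are illustrative, not the model's actual parameterization:

```python
import numpy as np

rng = np.random.default_rng(3)

def select_test(target, test1, test2, w=0.7, sigma=0.2):
    """Pick the test closer to the target, with distance a weighted mix of
    the colour and material axes; w is the colour-material weight."""
    def dist(t):
        return w * abs(t[0] - target[0]) + (1 - w) * abs(t[1] - target[1])
    return 1 if dist(test1) - dist(test2) + rng.normal(0, sigma) < 0 else 2

# Stimuli live on a 7x7 (colour level, material level) grid. test1 matches
# the target's colour, test2 its material; a colour-weighted observer
# (w = 0.7) should select test1 on most trials.
choices = [select_test((3, 3), (3, 1), (1, 3)) for _ in range(100)]
```

Individual differences in the fitted w are what separate colour-dominated from material-dominated observers in this framework.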
Affiliation(s)
- Ana Radonjić
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Nicolas P. Cottaris
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- David H. Brainard
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
11
Adams WJ, Kucukoglu G, Landy MS, Mantiuk RK. Naturally glossy: Gloss perception, illumination statistics, and tone mapping. J Vis 2019;18:4. PMID: 30508429; PMCID: PMC6279370; DOI: 10.1167/18.13.4.
Abstract
Recognizing materials and understanding their properties is very useful—perhaps critical—in daily life as we encounter objects and plan our interactions with them. Visually derived estimates of material properties guide where and with what force we grasp an object. However, the estimation of material properties, such as glossiness, is a classic ill-posed problem. Image cues that we rely on to estimate gloss are also affected by shape, illumination and, in visual displays, tone-mapping. Here, we focus on the latter two. We define some commonalities present in the structure of natural illumination, and determine whether manipulation of these natural “signatures” impedes gloss constancy. We manipulate the illumination field to violate statistical regularities of natural illumination, such that light comes from below, or the luminance distribution is no longer skewed. These manipulations result in errors in perceived gloss. Similarly, tone mapping has a dramatic effect on perceived gloss. However, when objects are viewed against an informative (rather than plain gray) background that reflects these manipulations, there are some improvements to gloss constancy: in particular, observers are far less susceptible to the effects of tone mapping when judging gloss. We suggest that observers are sensitive to some very simple statistics of the environment when judging gloss.
Affiliation(s)
- Wendy J Adams
- Department of Psychology, University of Southampton, Southampton, UK
- Gizem Kucukoglu
- Department of Psychology, New York University, New York, NY, USA
- Michael S Landy
- Department of Psychology, New York University, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
- Rafal K Mantiuk
- Department of Computer Science and Technology, University of Cambridge, Cambridge, UK
12
Tsuda H, Saiki J. Constancy of visual working memory of glossiness under real-world illuminations. J Vis 2018;18:14. PMID: 30167672; DOI: 10.1167/18.8.14.
Abstract
Glossiness is a surface property of material that is useful for recognizing objects and spaces. For glossiness to be effective across situations, our visual system must be unaffected by viewing contexts, such as lighting conditions. Although glossiness perception has constancy across changes in illumination, whether visual working memory also realizes glossiness constancy is not known. To address this issue, participants were presented with photo-realistic computer-generated images of spherical objects and asked to match the appearance of reference and test stimuli in relation to two dimensions of glossiness (contrast and sharpness). By comparing performance in terms of the match between perception and memory, we found that both features were well recalled, even when illumination contexts differed between the study and test periods. In addition, no correlation was found between recall errors related to contrast and sharpness, suggesting that these features are independently represented, not only in perception, as previously reported, but also in working memory. Taken together, these findings demonstrate the constancy of glossiness in visual working memory under conditions of real-world illumination.
Affiliation(s)
- Hiroyuki Tsuda
- Graduate School of Human and Environmental Studies, Kyoto University, Kyoto, Japan
- Jun Saiki
- Graduate School of Human and Environmental Studies, Kyoto University, Kyoto, Japan
13
Brainard DH, Cottaris NP, Radonjić A. The perception of colour and material in naturalistic tasks. Interface Focus 2018;8:20180012. PMID: 29951192; DOI: 10.1098/rsfs.2018.0012.
Abstract
Perceived object colour and material help us to select and interact with objects. Because there is no simple mapping between the pattern of an object's image on the retina and its physical reflectance, our perceptions of colour and material are the result of sophisticated visual computations. A long-standing goal in vision science is to describe how these computations work, particularly as they act to stabilize perceived colour and material against variation in scene factors extrinsic to object surface properties, such as the illumination. If we take seriously the notion that perceived colour and material are useful because they help guide behaviour in natural tasks, then we need experiments that measure and models that describe how they are used in such tasks. To this end, we have developed selection-based methods and accompanying perceptual models for studying perceived object colour and material. This focused review highlights key aspects of our work. It includes a discussion of future directions and challenges, as well as an outline of a computational observer model that incorporates early, known, stages of visual processing and that clarifies how early vision shapes selection performance.
Affiliation(s)
- David H Brainard
- Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Nicolas P Cottaris
- Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Ana Radonjić
- Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA