1. Kingdom FAA, Touma S, Jennings BJ. Negative afterimages facilitate the detection of real images. Vision Res 2020; 170:25-34. PMID: 32220671. DOI: 10.1016/j.visres.2020.03.005. Received 2019-05-09; revised 2020-03-05; accepted 2020-03-11.
Abstract
Negative, or complementary, afterimages are experienced following brief adaptation to chromatic or achromatic stimuli, and are believed to be formed in the post-receptoral layers of the retina. Afterimages can be cancelled by the addition of real images, suggesting that afterimages and real images are processed by similar mechanisms. However, given their retinal origin, afterimage signals represented at the cortical level might have different spatio-temporal properties from their real-image counterparts. To test this, we determined whether afterimages reduce the contrast threshold of added real images, i.e. produce the classic "dipper" function characteristic of contrast discrimination, a behavior believed to be cortically mediated. Stimuli were chromatic and achromatic disks on a grey background. Observers adapted for 1.0 s to two side-by-side disks of a particular color. Following stimulus offset, a test disk added to one side was ramped downwards in contrast for 1.5 s to approximately match the temporal characteristics of the afterimage, and the observer was required to indicate the side containing the test disk. The test hue/brightness was either the same as that of the afterimage or a different hue/brightness. The independent variable was the contrast of the adaptor. A dipper followed by masking was observed in most conditions in which the afterimage and test colors had the same hue or brightness. We conclude that afterimages are represented similarly to their real-image counterparts at the cortical level.
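The "dipper" shape the abstract refers to can be reproduced with a standard contrast-discrimination transducer model. The sketch below is a hypothetical Legge-Foley-style nonlinearity with illustrative parameters (not the paper's fitted model): increment thresholds dip below the detection threshold at near-threshold pedestals and rise above it (masking) at high pedestals.

```python
# Legge-Foley-style transducer: internal response to contrast c (percent).
# Exponents and constant are illustrative, not fitted to the paper's data.
def transducer(c, p=2.4, q=2.0, z=10.0):
    return c**p / (c**q + z)

def increment_threshold(pedestal, criterion=1.0):
    """Smallest dc that raises the response above the pedestal response by
    a fixed criterion (one d-prime unit), found by bisection."""
    target = transducer(pedestal) + criterion
    lo, hi = 1e-9, 100.0
    for _ in range(80):            # the transducer is monotonic in dc
        mid = 0.5 * (lo + hi)
        if transducer(pedestal + mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

detection = increment_threshold(0.0)   # threshold on a blank pedestal
dipped = increment_threshold(3.0)      # near-threshold pedestal: facilitation
masked = increment_threshold(30.0)     # high pedestal: masking
```

Plotting `increment_threshold` against pedestal contrast traces out the dip-then-rise curve described as a dipper followed by masking.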
Affiliation(s)
- Frederick A A Kingdom
- McGill Vision Research, Department of Ophthalmology and Vision Sciences, Montreal General Hospital, 1650 Cedar Ave., Rm L11.112, Montreal, Quebec H3G 1A4, Canada
- Samir Touma
- McGill Vision Research, Department of Ophthalmology and Vision Sciences, Montreal General Hospital, 1650 Cedar Ave., Rm L11.112, Montreal, Quebec H3G 1A4, Canada
- Ben J Jennings
- Centre for Cognitive Neuroscience, College of Health, Medicine and Life Sciences, Brunel University London, UK
2. Discrimination of spatial phase: The roles of luminance distribution and attention. Vision Res 2018; 150:1-7. PMID: 30003892. DOI: 10.1016/j.visres.2018.06.012. Received 2017-12-05; revised 2018-06-02; accepted 2018-06-25.
Abstract
We can easily discriminate certain phase relations in spatial patterns but not others. Phase perception has been found to differ between the fovea and the periphery, and between single patterns and textures. Different numbers of mechanisms have been proposed to account for the regularities of phase perception. In this study, I attempt to better understand the mechanisms behind the discrimination of spatial phase. To reveal the role of luminance cues, I use histogram matching of patterns with different phases. Possible effects of attention were studied using visual search experiments with varied stimulus set size. Simple and compound Gabor patches, broadband lines and edges, and textures composed of those patterns were used as stimuli. The experiments indicate that phase discrimination is mediated by two mechanisms. The first uses luminance differences and operates pre-attentively, in parallel across the visual field. The second compares the relative positions of dark and bright segments within an image, and is strictly limited by the capacity of attention.
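The histogram-matching manipulation mentioned above can be sketched as follows. This is a generic exact-histogram-matching routine (an illustration, not the author's code): it imposes one pattern's luminance distribution on another while preserving pixel rank order, thereby equating first-order luminance cues across phases.

```python
import numpy as np

def histogram_match(source, reference):
    """Impose the luminance histogram of `reference` onto `source`,
    preserving the rank order of source pixels (exact histogram matching
    for two equal-sized images)."""
    s, r = source.ravel(), reference.ravel()
    out = np.empty_like(s)
    # The k-th smallest source pixel receives the k-th smallest
    # reference value.
    out[np.argsort(s, kind="stable")] = np.sort(r)
    return out.reshape(source.shape)

rng = np.random.default_rng(0)
a = rng.normal(size=(32, 32))    # stand-in for a pattern at one phase
b = rng.uniform(size=(32, 32))   # stand-in for a different-phase pattern
matched = histogram_match(a, b)  # a's spatial layout, b's histogram
```

After matching, the two patterns differ only in the arrangement of their luminance values, not in the values themselves.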
3.
Abstract
A number of experiments have demonstrated that observers can accurately identify stimuli that they fail to detect (Rollman and Nachmias, 1972; Harris and Fahle, 1995; Allik et al., 1982, 2014). Using a 2x2AFC double-judgements procedure, we demonstrated an analogous pattern of performance in judgements about the direction of eye gaze. Participants were shown two faces in succession: one with direct gaze and one with gaze offset to the left or right. We found that they could identify the direction of gaze offset (left/right) better than they could detect which face contained the offset gaze. A simple Thurstonian model, under which the detection judgement is shown to be more computationally complex, was found to explain the empirical data. A further experiment incorporated metacognitive ratings into the double-judgements procedure to measure observers' metacognitive awareness (meta-d') across the two judgements and to assess whether observers were aware of the evidence for offset gaze when detection performance was at and below threshold. Results suggest that metacognitive awareness is tied to performance, with approximately equal meta-d' across the two judgements when sensitivity is taken into account. These results show that both performance and metacognitive awareness rely not only on the strength of sensory evidence but also on the computational complexity of the decision, which determines the relative distance of that evidence from the decision axes.
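The Thurstonian account sketched above, in which identification reads the sign of a single noisy evidence value while detection must compare unsigned deviations across the two faces, can be illustrated with a toy simulation. All parameters here are illustrative, not the paper's model fits.

```python
import numpy as np

rng = np.random.default_rng(1)
n, offset, sigma = 200_000, 1.0, 1.0

# Each trial: one face with gaze offset +/- `offset`, one with direct
# gaze (0). Internal responses are true values plus Gaussian noise.
true_sign = rng.choice([-1.0, 1.0], n)
resp_offset = true_sign * offset + sigma * rng.standard_normal(n)
resp_direct = 0.0 + sigma * rng.standard_normal(n)

# Identification: judge offset direction from the sign of the offset
# face's response (a single signed read-out).
p_identify = np.mean(np.sign(resp_offset) == true_sign)

# Detection: pick the face whose response deviates more from direct
# gaze, an unsigned comparison one computational step further from
# the raw evidence.
p_detect = np.mean(np.abs(resp_offset) > np.abs(resp_direct))
```

With the same underlying evidence, the signed identification judgement outperforms the unsigned detection judgement, matching the identification-better-than-detection pattern reported above.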
4.
Abstract
Crowding, the deleterious influence of clutter on object recognition, disrupts the identification of visual features as diverse as orientation, motion, and color. It is unclear whether this occurs via independent feature-specific crowding processes (preceding the feature-binding process) or via a singular (late) mechanism tuned for combined features. To examine the relationship between feature binding and crowding, we measured interactions between the crowding of relative position and orientation. Stimuli were a target cross and two flanker crosses (each composed of two near-orthogonal lines), presented 15 degrees in the periphery. Observers judged either the orientation (clockwise/counterclockwise) of the near-horizontal target line, its position (up/down relative to the stimulus center), or both. For single-feature judgments, crowding affected position and orientation similarly: thresholds were elevated and responses biased in a manner suggesting that the target appeared more like the flankers. These effects were tuned for orientation, with near-orthogonal elements producing little crowding. This tuning allowed us to separate the predictions of independent (feature-specific) and combined (singular) models: for an independent model, reduced crowding for one feature has no effect on crowding for other features, whereas a combined process affects either all features or none. When observers made conjoint judgments, a reduction of orientation crowding (achieved by increasing target-flanker orientation differences) increased the rate of correct responses for both position and orientation, as predicted by our combined model. In contrast, our independent model incorrectly predicted a high rate of position errors, since the probability of positional crowding would be unaffected by changes in orientation. Thus, at least for these features, crowding is a singular process that affects bound position and orientation values in an all-or-none fashion.
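The contrast between the two architectures can be made concrete with a toy probability model (parameter values are illustrative, not the paper's fits): a crowded feature is assumed to be reported at chance on a binary judgment, and we ask what happens to joint position-and-orientation accuracy when orientation crowding is abolished by a large target-flanker orientation difference.

```python
# Independent model: position and orientation crowd in separate events,
# with probabilities f_pos and f_ori.
def independent_model(f_pos, f_ori):
    p_pos = 1 - 0.5 * f_pos   # crowded -> chance on a binary judgment
    p_ori = 1 - 0.5 * f_ori
    return p_pos * p_ori       # joint probability both are correct

# Combined model: a single all-or-none event crowds both features.
def combined_model(f_all):
    return (1 - f_all) + f_all * 0.25   # intact, or both at chance

# With a large orientation difference, orientation crowding vanishes.
# Combined: abolishing the single event releases position as well.
# Independent: position crowding persists unchanged.
released_combined = combined_model(0.0)
released_indep = independent_model(f_pos=0.6, f_ori=0.0)
```

The combined model predicts that releasing one feature releases both (joint accuracy returns to ceiling), whereas the independent model still predicts frequent position errors, which is the diagnostic contrast the abstract describes.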
Affiliation(s)
- John A Greenwood
- UCL Institute of Ophthalmology, University College London, London, UK.
5. Langley K, Anderson SJ. The Riesz transform and simultaneous representations of phase, energy and orientation in spatial vision. Vision Res 2011; 50:1748-65. PMID: 20685326. DOI: 10.1016/j.visres.2010.05.031. Received 2010-02-17; revised 2010-05-13; accepted 2010-05-17.
Abstract
To represent the local orientation and energy of a 1-D image signal, many models of early visual processing employ bandpass quadrature filters, formed by combining the original signal with its Hilbert transform. However, representations capable of estimating an image signal's 2-D phase have been largely ignored. Here, we consider 2-D phase representations using a method based upon the Riesz transform. For spatial images there exist two Riesz-transformed signals and one original signal, from which orientation, phase and energy may be represented as a vector in 3-D signal space. We show that these image properties may be represented by a Singular Value Decomposition (SVD) of the higher-order derivatives of the original and the Riesz-transformed signals. We further show that the expected responses of even- and odd-symmetric filters from the Riesz transform may be represented by a single signal autocorrelation function, which is beneficial in simplifying Bayesian computations for spatial orientation. Importantly, the Riesz transform allows one to weight linearly across orientation using both symmetric and asymmetric filters to account for some perceptual phase distortions observed in image signals, notably one's perception of edge structure within plaid patterns whose component gratings are either equal or unequal in contrast. Finally, exploiting the benefits that arise from the Riesz definition of local energy as a scalar quantity, we demonstrate the utility of Riesz signal representations in estimating the spatial orientation of second-order image signals. We conclude that the Riesz transform may be employed as a general tool for 2-D visual pattern recognition by virtue of representing phase, orientation and energy as orthogonal signal quantities.
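For readers unfamiliar with the construction, the two Riesz-transformed signals are conveniently computed in the frequency domain. The sketch below is a generic monogenic-signal implementation (not the authors' code; sign conventions vary across the literature) that recovers local energy, orientation and phase for a simple grating.

```python
import numpy as np

def riesz_transform(img):
    """First-order Riesz transform of a 2-D image via the FFT.
    Returns the two Riesz components (r1, r2); together with the original
    image they form the monogenic signal, from which local energy,
    orientation and phase are read off as orthogonal quantities."""
    f = np.fft.fft2(img)
    u = np.fft.fftfreq(img.shape[0])[:, None]   # vertical frequency
    v = np.fft.fftfreq(img.shape[1])[None, :]   # horizontal frequency
    mag = np.sqrt(u**2 + v**2)
    mag[0, 0] = 1.0                              # avoid divide-by-zero at DC
    r1 = np.real(np.fft.ifft2(-1j * u / mag * f))
    r2 = np.real(np.fft.ifft2(-1j * v / mag * f))
    return r1, r2

# Monogenic quantities for a horizontal-frequency grating (demo input):
y, x = np.mgrid[0:64, 0:64]
img = np.cos(2 * np.pi * x / 8.0)
r1, r2 = riesz_transform(img)
energy = np.sqrt(img**2 + r1**2 + r2**2)     # local energy (scalar)
orientation = np.arctan2(r2, r1)             # local orientation
phase = np.arctan2(np.hypot(r1, r2), img)    # local phase
```

For this pure grating the quadrature pair (cosine carrier, sine Riesz component) gives a flat local energy of 1, illustrating the scalar-energy property the abstract exploits.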
Affiliation(s)
- Keith Langley
- Cognitive, Perceptual and Brain Sciences, University College London, London, UK.
6. Lightness, brightness and transparency: a quarter century of new ideas, captivating demonstrations and unrelenting controversy. Vision Res 2010; 51:652-73. PMID: 20858514. DOI: 10.1016/j.visres.2010.09.012. Received 2010-03-18; revised 2010-09-03; accepted 2010-09-09.
Abstract
The past quarter century has witnessed considerable advances in our understanding of Lightness (perceived reflectance), Brightness (perceived luminance) and perceived Transparency (LBT). This review poses eight major conceptual questions that have engaged researchers during this period, and considers to what extent they have been answered. The questions concern 1. the relationship between lightness, brightness and perceived non-uniform illumination, 2. the brain site for lightness and brightness perception, 3. the effects of context on lightness and brightness, 4. the relationship between brightness and contrast for simple patch-background stimuli, 5. brightness "filling-in", 6. lightness anchoring, 7. the conditions for perceptual transparency, and 8. the perceptual representation of transparency. The discussion of progress on major conceptual questions inevitably requires an evaluation of which approaches to LBT are likely, and which are unlikely, to bear fruit in the long term, and of which issues remain unresolved. It is concluded that the most promising developments in LBT are (a) models of brightness coding based on multi-scale filtering combined with contrast normalization, (b) the idea that the visual system decomposes the image into "layers" of reflectance, illumination and transparency, (c) the idea that an understanding of image statistics is important to an understanding of lightness errors, (d) Whittle's log W metric for contrast-brightness, (e) the idea that "filling-in" is mediated by low spatial frequencies rather than by neural spreading, and (f) the finding that there exist multiple cues for identifying non-uniform illumination and transparency. Unresolved issues include how relative lightness values are anchored to produce absolute lightness values, and the perceptual representation of transparency. Bridging the gap between the multi-scale filtering and layer decomposition approaches to LBT is a major task for future research.
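Of the developments listed, Whittle's log W metric is compact enough to state in code. The sketch below is a paraphrase of the commonly cited form (W = dL / L_min, with brightness roughly linear in log(1 + W)); consult the original work for the exact formulation, as this is an assumption of this note rather than the review's own statement.

```python
import math

def whittle_W(L_patch, L_background):
    """Whittle-style contrast: W = dL / L_min, where L_min is the darker
    of the patch and background luminances (units arbitrary but matched)."""
    dL = abs(L_patch - L_background)
    return dL / min(L_patch, L_background)

def log_brightness(L_patch, L_background):
    """Brightness predictor, approximately linear in log(1 + W)."""
    return math.log1p(whittle_W(L_patch, L_background))

# A key property: increments and decrements of equal W are treated alike.
w_inc = whittle_W(120.0, 100.0)   # increment on a 100 cd/m^2 background
w_dec = whittle_W(100.0, 120.0)   # the matching decrement
```

Dividing by the darker luminance makes the metric symmetric for matched increments and decrements, one reason it fits contrast-brightness data for simple patch-background stimuli.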
7. Kelman E, Baddeley R, Shohet A, Osorio D. Perception of visual texture and the expression of disruptive camouflage by the cuttlefish, Sepia officinalis. Proc Biol Sci 2007; 274:1369-75. PMID: 17389219. PMCID: PMC2176201. DOI: 10.1098/rspb.2007.0240. Open access.
Abstract
Juvenile cuttlefish (Sepia officinalis) camouflage themselves by changing their body pattern according to the background. This behaviour can be used to investigate visual perception in these molluscs and may also give insight into camouflage design. Edge detection is an important aspect of vision, and here we compare the body patterns that cuttlefish produced in response to checkerboard backgrounds with responses to backgrounds that have the same spatial frequency power spectrum as the checkerboards, but randomized spatial phase. For humans, phase randomization removes visual edges. To describe the cuttlefish body patterns, we scored the level of expression of 20 separate pattern 'components', and then derived principal components (PCs) from these scores. After varimax rotation, the first component (PC1) corresponded closely to the so-called disruptive body pattern, and the second (PC2) to the mottle pattern. PC1 was predominantly expressed on checkerboards, and PC2 on phase-randomized backgrounds. Thus, cuttlefish probably have edge detectors that control the expression of disruptive pattern. Although the experiments used unnatural backgrounds, it seems probable that cuttlefish display disruptive camouflage when there are edges in the visual background caused by discrete objects such as pebbles. We discuss the implications of these findings for our understanding of disruptive camouflage.
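The phase-randomization control used above (same spatial-frequency power spectrum, randomized spatial phase) can be sketched generically in the Fourier domain. This is a standard construction, not the authors' code; borrowing the phase spectrum of a real white-noise image keeps the result real-valued.

```python
import numpy as np

def phase_randomize(img, rng):
    """Return an image with the same Fourier amplitude (hence power)
    spectrum as `img` but randomized spatial phase, the manipulation
    that removes visible edges for human observers."""
    amplitude = np.abs(np.fft.fft2(img))
    # Phases of a real white-noise image are Hermitian-symmetric, so
    # the inverse transform below is real up to floating-point roundoff.
    noise_phase = np.angle(np.fft.fft2(rng.standard_normal(img.shape)))
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * noise_phase)))

rng = np.random.default_rng(0)
i, j = np.indices((64, 64))
checks = ((i // 8 + j // 8) % 2).astype(float)   # toy checkerboard
scrambled = phase_randomize(checks, rng)          # edges removed
```

The scrambled image retains the checkerboard's power spectrum exactly, so any behavioural difference between the two backgrounds can be attributed to phase (edge) structure.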
Affiliation(s)
- E. J. Kelman
- School of Life Sciences, University of Sussex, Falmer, Brighton BN1 9QG, UK
- R. J. Baddeley
- Department of Experimental Psychology, Social Sciences Complex, 8 Woodland Road, Clifton, Bristol BS8 1TN, UK
- A. J. Shohet
- School of Life Sciences, University of Sussex, Falmer, Brighton BN1 9QG, UK
- D. Osorio
- School of Life Sciences, University of Sussex, Falmer, Brighton BN1 9QG, UK
- Author for correspondence
8. Gheorghiu E, Kingdom FAA. Luminance-contrast properties of contour-shape processing revealed through the shape-frequency after-effect. Vision Res 2006; 46:3603-15. PMID: 16769101. DOI: 10.1016/j.visres.2006.04.021. Received 2006-01-09; revised 2006-04-16; accepted 2006-04-28.
Abstract
We investigated the first-order inputs to contour-shape mechanisms using the shape-frequency after-effect (SFAE), in which adaptation to a sinusoidally modulated contour causes a shift in the apparent shape-frequency of a test contour in a direction away from that of the adapting stimulus [Kingdom F. A. A., & Prins N. (2005a). Different mechanisms encode the shapes of contours and contour-textures. Journal of Vision 5(8), 463, (Abstract)]. We measured SFAEs for adapting and test contours (and edges) that differed in the contrast polarity, scale (or blur) and magnitude of their luminance contrast. The rationale was that if the SFAE was reduced when adaptor and test differed along a particular dimension of luminance contrast, contour-shape mechanisms must be tuned to that dimension. Our results reveal that SFAEs manifest (i) a degree of selectivity to luminance contrast polarity for both even-symmetric (contours only) and odd-symmetric (both contours and edges) luminance profiles; (ii) a degree of selectivity to luminance scale (or blur); (iii) higher selectivity to fine compared with coarse scale for broadband edges; and (iv) a small preference for adaptors and tests of equal contrast. These results suggest that contour shapes are not encoded in the form of a sparse, cartoon-like sketch, as might be presumed by local energy (i.e. non-phase-selective) or form-cue-invariant models, but instead in a form that is relatively 'feature-rich.'
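The adapting stimulus described above, a contour whose position is modulated sinusoidally along its length, is easy to sketch. The function below is an illustration with made-up units and parameter names, not the authors' stimulus code; adaptor and test differ only in shape frequency.

```python
import numpy as np

def sine_contour(shape_freq_cpd, amplitude_deg, extent_deg=10.0, n=512):
    """Sample a sinusoidally modulated contour: vertical position
    modulated sinusoidally along the horizontal axis. Units (degrees of
    visual angle) and default values are illustrative."""
    x = np.linspace(0.0, extent_deg, n)
    y = amplitude_deg * np.sin(2 * np.pi * shape_freq_cpd * x)
    return x, y

# Adaptor and test differ in shape frequency; after adaptation the
# test's apparent shape frequency shifts away from the adaptor's.
x, y_adapt = sine_contour(shape_freq_cpd=0.5, amplitude_deg=0.4)
_, y_test = sine_contour(shape_freq_cpd=0.3, amplitude_deg=0.4)
```

Varying the contrast polarity, blur or contrast magnitude of the rendered contour, while holding its shape fixed, is what allowed the study to probe the luminance tuning of the shape mechanism.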
Affiliation(s)
- Elena Gheorghiu
- McGill Vision Research, Department of Ophthalmology, McGill University, Montreal, Quebec, Canada.