1
Abstract
It is a matter of common sense that a person is easier to recognize when close than when far away. A possible explanation for why this happens begins with two observations. First, the human visual system, like many image-processing devices, can be viewed as a spatial filter that passes higher spatial frequencies, expressed in terms of cycles/degree, progressively more poorly. Second, as a face is moved farther from the observer, the face's image spatial frequency spectrum, expressed in terms of cycles/face, scales downward in a manner inversely proportional to distance. An implication of these two observations is that as a face moves away, progressively lower spatial frequencies, expressed in cycles/face--and therefore, progressively coarser facial details--are lost to the observer at a rate that is likewise inversely proportional to distance. We propose what we call the distance-as-filtering hypothesis, which is that these two observations are sufficient to explain the effect of distance on face processing. If the distance-as-filtering hypothesis is correct, one should be able to simulate the effect of seeing a face at some distance, D, by filtering the face so as to mimic its spatial frequency composition, expressed in terms of cycles/face, at that distance. In four experiments, we measured face perception at varying distances that were simulated either by filtering the face as just described or by shrinking the face so that it subtended the visual angle corresponding to the desired distance. The distance-as-filtering hypothesis was confirmed perfectly in two face perception tasks: assessing the informational content of the face and identifying celebrities. Data from the two tasks could be accounted for by assuming that they were mediated by different low-pass spatial filters within the human visual system that have the same general mathematical description but that differ in scale by a factor of approximately 0.75. 
We discuss our results in terms of (1) how they can be used to explain the effect of distance on visual processing, (2) what they tell us about face processing, (3) how they are related to "flexible spatial scale usage," as discussed by Schyns and colleagues, and (4) how they may be used in practical (e.g., legal) settings to demonstrate the loss of face information that occurs when a person is seen at a particular distance.
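The filtering manipulation described in this abstract can be sketched computationally. The following is a minimal illustration, not the authors' implementation: it assumes a Gaussian low-pass filter and nominal values for the visual-system cutoff (`cutoff_cpd`, in cycles/degree) and the physical face width; it converts that fixed cycles/degree limit into a cycles/face cutoff that shrinks roughly in inverse proportion to viewing distance, then filters the image accordingly.

```python
import numpy as np

def simulate_distance(img, distance_m, face_width_m=0.2, cutoff_cpd=8.0):
    """Low-pass filter a 2-D grayscale face image to mimic viewing it
    at distance_m. face_width_m and cutoff_cpd are assumed parameters,
    and the face is assumed to fill the image, so cycles/image serves
    as cycles/face."""
    # Angular size of the face in degrees.
    angle_deg = np.degrees(2.0 * np.arctan(face_width_m / (2.0 * distance_m)))
    # Equivalent cutoff in cycles/face: falls as distance grows.
    cutoff_cpf = cutoff_cpd * angle_deg

    h, w = img.shape
    fy = np.fft.fftfreq(h) * h   # vertical frequency, cycles/image
    fx = np.fft.fftfreq(w) * w   # horizontal frequency, cycles/image
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    # Gaussian low-pass transfer function applied in the frequency domain.
    transfer = np.exp(-0.5 * (radius / cutoff_cpf) ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * transfer))
```

Because the transfer function is 1 at DC and below 1 elsewhere, a "farther" image retains less high-spatial-frequency energy than a "nearer" one, which is the loss of fine facial detail the hypothesis describes.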
2
Franconeri SL, Scimeca JM, Roth JC, Helseth SA, Kahn LE. Flexible visual processing of spatial relationships. Cognition 2012; 122:210-27. [DOI: 10.1016/j.cognition.2011.11.002]
3
Simmering VR, Schutte AR, Spencer JP. Generalizing the dynamic field theory of spatial cognition across real and developmental time scales. Brain Res 2008; 1202:68-86. [PMID: 17716632] [PMCID: PMC2593104] [DOI: 10.1016/j.brainres.2007.06.081]
Abstract
Within cognitive neuroscience, computational models are designed to provide insights into the organization of behavior while adhering to neural principles. These models should provide sufficient specificity to generate novel predictions while maintaining the generality needed to capture behavior across tasks and/or time scales. This paper presents one such model, the dynamic field theory (DFT) of spatial cognition, showing new simulations that provide a demonstration proof that the theory generalizes across developmental changes in performance in four tasks: the Piagetian A-not-B task, a sandbox version of the A-not-B task, a canonical spatial recall task, and a position discrimination task. Model simulations demonstrate that the DFT can accomplish both specificity (generating novel, testable predictions) and generality (spanning multiple tasks across development) with a relatively simple developmental hypothesis. Critically, the DFT achieves generality across tasks and time scales with no modification to its basic structure and with a strong commitment to neural principles. The only change necessary to capture development in the model was an increase in the precision of the tuning of receptive fields as well as an increase in the precision of local excitatory interactions among neurons in the model. These small quantitative changes were sufficient to move the model through a set of quantitative and qualitative behavioral changes that span the age range from 8 months to 6 years and into adulthood. We conclude by considering how the DFT is positioned in the literature, the challenges on the horizon for our framework, and how a dynamic field approach can yield new insights into development from a computational cognitive neuroscience perspective.
4
Abstract
Examples of visual motion have become more and more abstract over the years, leading up to ‘third-order’ stimuli where direction is actually determined by the observer through top–down attention. But how far can this be pushed—are there motion stimuli that are yet more arbitrary and abstract? Actually, there is a broad class of ‘conceptual motion’ stimuli—things like a moving grating of faces, or a shifting pattern of words—that are perfect analogs to traditional ‘perceptual motion’ stimuli, solvable by the same motion computation and for which observers can readily make direction-of-motion judgments. Interestingly though, these do not produce a sensation of motion (among other automatic consequences of motion detection). Here we compare a luminance-based perceptual motion stimulus to a semantic-based conceptual motion stimulus to contrast the psychophysical hallmarks of these motion categories.
Affiliation(s)
- Erik Blaser
- University of Massachusetts, 100 Morrissey Blvd, Boston, MA 02125-3393, USA
5
Simmering VR, Spencer JP, Schöner G. Reference-related inhibition produces enhanced position discrimination and fast repulsion near axes of symmetry. Percept Psychophys 2006; 68:1027-46. [PMID: 17153196] [DOI: 10.3758/bf03193363]
Abstract
Models proposed to account for reference frame effects in spatial cognition often account for performance in some tasks well, but fail to generalize to other tasks. Here, we demonstrate that a new process account of spatial working memory--the dynamic field theory (DFT)--can bridge the gap between perceptual and memory processes in position discrimination and spatial recall, highlighting that the processes underlying spatial recall also operate in position discrimination. In six experiments, we tested two novel predictions of the DFT: first, that discrimination is enhanced near symmetry axes, especially when the perceptual salience of the axis is increased; and second, that performance far from a reference axis depends on the direction in which the second stimulus is presented. The DFT also predicts the magnitude of this direction-dependent modulation. These effects arise from reference-related inhibition in the theory. We discuss how the processes captured by the DFT relate to existing psychophysical models and operate across a diverse array of spatial tasks.
Affiliation(s)
- Vanessa R Simmering
- Department of Psychology, University of Iowa, E11 Seashore Hall, Iowa City, IA 52242, USA.
6
Demany L, Ramos C. On the binding of successive sounds: perceiving shifts in nonperceived pitches. J Acoust Soc Am 2005; 117:833-41. [PMID: 15759703] [DOI: 10.1121/1.1850209]
Abstract
It is difficult to hear out individually the components of a "chord" of equal-amplitude pure tones with synchronous onsets and offsets. In the present study, this was confirmed using 300-ms random (inharmonic) chords with components at least 1/2 octave apart. Following each chord, after a variable silent delay, listeners were presented with a single pure tone which was either identical to one component of the chord or halfway in frequency between two components. These two types of sequence could not be reliably discriminated from each other. However, it was also found that if the single tone following the chord was instead slightly (e.g., 1/12 octave) lower or higher in frequency than one of its components, the same listeners were sensitive to this relation. They could perceive a pitch shift in the corresponding direction. Thus, it is possible to perceive a shift in a nonperceived frequency/pitch. This paradoxical phenomenon provides psychophysical evidence for the existence of automatic "frequency-shift detectors" in the human auditory system. The data reported here suggest that such detectors operate at an early stage of auditory scene analysis but can be activated by a pair of sounds separated by a few seconds.
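The stimulus construction described in this abstract is straightforward to sketch. The recipe below is a hypothetical illustration, not the authors' code: the component count and frequency range are assumptions, while the 1/2-octave minimum separation and the small probe shift (e.g., 1/12 octave) come from the abstract.

```python
import numpy as np

def random_chord(n=5, fmin=250.0, fmax=4000.0, min_sep_oct=0.5, seed=None):
    """Draw n pure-tone component frequencies (Hz), every adjacent pair
    at least min_sep_oct octaves apart, by rejection sampling on a
    log-frequency (octave) scale."""
    rng = np.random.default_rng(seed)
    span = np.log2(fmax / fmin)
    while True:
        octs = np.sort(rng.uniform(0.0, span, n))
        if np.all(np.diff(octs) >= min_sep_oct):
            return fmin * 2.0 ** octs

def probe_tone(chord, index, direction=+1, shift_oct=1.0 / 12.0):
    """A single tone slightly above (+1) or below (-1) one chord
    component, the relation listeners could reportedly detect."""
    return chord[index] * 2.0 ** (direction * shift_oct)
```

A matched control probe, per the abstract, would instead sit halfway (in log frequency) between two components, the case listeners could not reliably distinguish from an exact match.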
Affiliation(s)
- Laurent Demany
- Laboratoire de Neurophysiologie, CNRS and Université Victor Segalen (UMR 5543), F-33076 Bordeaux, France.
7
Abstract
We contrast 2 theories within whose context problems are conceptualized and data interpreted. By traditional linear theory, a dependent variable is the sum of main-effect and interaction terms. By dimensional theory, independent variables yield values on internal dimensions that in turn determine performance. We frame our arguments within an investigation of the face-inversion effect--the greater processing disadvantage of inverting faces compared with non-faces. We report data from 3 simulations and 3 experiments wherein faces or non-faces are studied upright or inverted in a recognition procedure. The simulations demonstrate that (a) critical conclusions depend on which theory is used to interpret data and (b) dimensional theory is the more flexible and consistent in identifying underlying psychological structures, because dimensional theory subsumes linear theory as a special case. The experiments demonstrate that by dimensional theory, there is no face-inversion effect for unfamiliar faces but a clear face-inversion effect for celebrity faces.
Affiliation(s)
- Geoffrey R Loftus
- Department of Psychology, University of Washington, Seattle, WA 98195-1525, USA.
8
Harley EM, Dillon AM, Loftus GR. Why is it difficult to see in the fog? How stimulus contrast affects visual perception and visual memory. Psychon Bull Rev 2004; 11:197-231. [PMID: 15260187] [DOI: 10.3758/bf03196564]
Abstract
Processing visually degraded stimuli is a common experience. We struggle to find house keys on dim front porches, to decipher slides projected in overly bright seminar rooms, and to read 10th-generation photocopies. In this research, we focus specifically on stimuli that are degraded via reduction of stimulus contrast and address two questions. First, why is it difficult to process low-contrast, as compared with high-contrast, stimuli? Second, is the effect of contrast fundamental in that its effect is independent of the stimulus being processed and the reason for processing the stimulus? We formally address and answer these questions within the context of a series of nested theories, each providing a successively stronger definition of what it means for contrast to affect perception and memory. To evaluate the theories, we carried out six experiments. Experiments 1 and 2 involved simple stimuli (randomly generated forms and digit strings), whereas Experiments 3-6 involved naturalistic pictures (faces, houses, and cityscapes). The stimuli were presented at two contrast levels and at varying exposure durations. The data from all the experiments allow the conclusion that some function of stimulus contrast combines multiplicatively with stimulus duration at a stage prior to that at which the nature of the stimulus and the reason for processing it are determined, and it is the result of this multiplicative combination that determines eventual memory performance. We describe a stronger version of this theory--the sensory-response, information-acquisition theory--which has at its core the strong Bloch's-law-like assumption of a fundamental visual system response that is proportional to the product of stimulus contrast and stimulus duration.
However, it was less successful in accounting for data from short-duration naturalistic pictures and was entirely unsuccessful in accounting for data from naturalistic pictures shown at longer durations. We discuss (1) processing differences between short- and long-duration stimuli, (2) processing differences between simple stimuli, such as digits, and complex stimuli, such as pictures, (3) processing differences between biluminant stimuli (such as line drawings with only two luminance levels) and multiluminant stimuli (such as grayscale pictures with multiple luminance levels), and (4) Bloch's law and a proposed generalization of the concept of metamers.
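The multiplicative claim at the core of this theory can be stated in one line. The sketch below is an illustration of that claim, not the authors' model: `g` stands in for the unspecified function of contrast, and the assertion shows the metamer-like prediction that contrast-duration pairs with equal product yield the same internal response, and hence the same predicted memory performance.

```python
def sensory_response(contrast, duration_ms, g=lambda c: c):
    """Bloch's-law-like response: some function of contrast, g(c),
    combines multiplicatively with stimulus duration."""
    return g(contrast) * duration_ms

# Metamer-like prediction (with g taken as the identity, an assumption):
# halving contrast while doubling duration leaves the response unchanged.
assert sensory_response(0.8, 50) == sensory_response(0.4, 100)
```

The generalized metamer idea mentioned at the end of the abstract corresponds to exactly such pairs: physically different stimuli that produce the same fundamental visual-system response.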
Affiliation(s)
- Erin M Harley
- University of California, Los Angeles, California, USA
9
Abstract
Five aspects of visual change detection are reviewed. The first concerns the concept of change itself, in particular the ways it differs from the related notions of motion and difference. The second involves the various methodological approaches that have been developed to study change detection; it is shown that under a variety of conditions observers are often unable to see large changes directly in their field of view. Next, it is argued that this "change blindness" indicates that focused attention is needed to detect change, and that this can help map out the nature of visual attention. The fourth aspect concerns how these results affect our understanding of visual perception--for example, the implication that a sparse, dynamic representation underlies much of our visual experience. Finally, a brief discussion is presented concerning the limits to our current understanding of change detection.
Affiliation(s)
- Ronald A Rensink
- Department of Psychology, University of British Columbia, Vancouver, British Columbia, V6T 1Z4 Canada.
10
Hock HS, Eastman K, Field L, Stutin C. The effects of common movement and spatial separation on position- and motion-based judgements of relative movement. Vision Res 1992; 32:1043-54. [PMID: 1509695] [DOI: 10.1016/0042-6989(92)90005-4]
Abstract
When common movement is superimposed on relative movement (changes in separation between two dots), relative movement thresholds increase nonlinearly as a function of initial dot separation. For large separations (greater than 2.0 deg), thresholds increase gradually with increased separation. It is shown that this reflects judgments based on perceived relative motion. For small separations (less than 2.0 deg), thresholds increase sharply with increased separation. It is shown that this reflects judgments based on perceived changes in relative position. Evidence is presented that superimposed common movement reduces sensitivity to relative movement by reducing sensitivity to relative motion. This provides a "window", in the range of small dot separations, for relative movement judgments to be based on the perception of changes in relative position, even though motion is perceived for individual dots.
Affiliation(s)
- H S Hock
- Department of Psychology, Florida Atlantic University, Boca Raton, FL 33431, USA
11
Abstract
A nearby visual reference point facilitates displacement discrimination. For example, a nearby stationary point of light improves discrimination of a point's leftward or rightward displacement severalfold. This reference effect interacts with the temporal characteristics of displacement. Discrimination thresholds rise with increases in the delay between the offset of the initial stimulus and the onset of the displaced stimulus. Moreover, the effect of a nearby reference point is larger with a long delay than with a short delay. This interaction was investigated using two nonexclusive hypotheses about the mediating mechanisms. A single-signal hypothesis specifies a single mediating mechanism that aggregates the effect of displacement and delay. A single-noise hypothesis specifies a single mediating mechanism that aggregates the effect of reference and delay. Each of these hypotheses predicts an equivalence property similar to the equivalent background principle of adaptation research. The equivalence required by the single-signal hypothesis was not satisfied, disconfirming the hypothesis. In contrast, the equivalence required by the single-noise hypothesis was satisfied. The equivalence was extended to delays where the reported judgment was of perceived position and not of perceived movement. The second result is compatible with displacement discrimination mechanisms that are the same function of displacement, whatever their other differences.