1. Baxter BA, Warren WH. A day at the beach: Does visually perceived distance depend on the energetic cost of walking? J Vis 2021; 21:13. [PMID: 34812836] [PMCID: PMC8626849] [DOI: 10.1167/jov.21.12.13]
Abstract
It takes less effort to walk from here to the Tiki Hut on the brick walkway than on the sandy beach. Does that influence how far away the Tiki Hut looks? The energetic cost of walking on dry sand is twice that of walking on firm ground (Lejeune et al., 1998). If perceived distance depends on the energetic cost or anticipated effort of walking (Proffitt, 2006), then the distance of a target viewed over sand should appear much greater than one viewed over brick. If perceived distance is specified by optical information (e.g., declination angle from the horizon; Ooi et al., 2001), then the distances should appear similar. Participants (N = 13) viewed a target at a distance of 5, 7, 9, or 11 m over sand or brick and then blind-walked an equivalent distance on the same or different terrain. First, we observed no main effect of walked terrain; walked distances on sand and brick were the same (p = 0.46), indicating that locomotion was calibrated to each substrate. Second, responses were actually greater after viewing over brick than over sand (p < 0.001), opposite to the prediction of the energetic hypothesis. This unexpected overshooting can be explained by the slight incline of the brick walkway, which partially raises the visually perceived eye level (VPEL) and increases the target distance specified by the declination angle. The result is thus consistent with the information hypothesis. We conclude that visually perceived egocentric distance depends on optical information and not on the anticipated energetic cost of walking.
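For readers unfamiliar with the declination cue invoked here, the standard ground-plane geometry is a one-line relation; this is a sketch under the usual assumptions (level ground, eye height h; the symbols and numbers are chosen for this note, not taken from the abstract):

\[ d = \frac{h}{\tan\delta} \]

where \(\delta\) is the target's angular declination below the (visually perceived) eye level. If an incline raises a target's optical elevation more than it raises the VPEL, \(\delta\) shrinks and the specified distance d grows, which is the direction of the overshooting reported for the brick walkway.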
Affiliation(s)
- Brittany A Baxter, Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- William H Warren, Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
2. The foggy effect of egocentric distance in a nonverbal paradigm. Sci Rep 2021; 11:14398. [PMID: 34257323] [PMCID: PMC8277830] [DOI: 10.1038/s41598-021-93380-9]
Abstract
Inaccurate egocentric distance and speed perception are two main explanations for the high accident rate associated with driving in foggy weather. The effect of foggy weather on perceived speed has been well studied; its effect on egocentric distance perception is poorly understood. Previous studies measured perceived egocentric distance with verbal estimation rather than a nonverbal paradigm. In the current research, a nonverbal paradigm, the visual matching task, was used. Our results from the nonverbal task revealed a robust foggy effect on egocentric distance: observers overestimated egocentric distance in foggy weather compared to clear weather, and the higher the concentration of fog, the greater the overestimation. This effect was not limited to a certain distance range but held in both action space and vista space. Our findings confirm the foggy effect with a nonverbal paradigm and suggest that people may perceive egocentric distance more "accurately" in foggy weather than verbal estimation tasks indicate.
3. DeFINE: Delayed feedback-based immersive navigation environment for studying goal-directed human navigation. Behav Res Methods 2021; 53:2668-2688. [PMID: 34027593] [DOI: 10.3758/s13428-021-01586-6]
Abstract
With the advent of consumer-grade products for presenting an immersive virtual environment (VE), there is a growing interest in utilizing VEs for testing human navigation behavior. However, preparing a VE still requires a high level of technical expertise in computer graphics and virtual reality, posing a significant hurdle to embracing the emerging technology. To address this issue, this paper presents Delayed Feedback-based Immersive Navigation Environment (DeFINE), a framework that allows for easy creation and administration of navigation tasks within customizable VEs via intuitive graphical user interfaces and simple settings files. Importantly, DeFINE has a built-in capability to provide performance feedback to participants during an experiment, a feature that is critically missing in other similar frameworks. To show the usability of DeFINE from both experimentalists' and participants' perspectives, a demonstration was made in which participants navigated to a hidden goal location with feedback that differentially weighted speed and accuracy of their responses. In addition, the participants evaluated DeFINE in terms of its ease of use, required workload, and proneness to induce cybersickness. The demonstration exemplified typical experimental manipulations DeFINE accommodates and what types of data it can collect for characterizing participants' task performance. With its out-of-the-box functionality and potential customizability due to open-source licensing, DeFINE makes VEs more accessible to many researchers.
4. Foley JM. Visually directed action. J Vis 2021; 21:25. [PMID: 34019620] [PMCID: PMC8142698] [DOI: 10.1167/jov.21.5.25]
Abstract
When people throw or walk to targets in front of them without visual feedback, they often respond short. With feedback, responses rapidly become approximately accurate. To understand this, an experiment is performed with four stages. 1) The errors in blind walking and blind throwing are measured in a virtual environment in light and dark cue conditions. 2) Error feedback is introduced and the resulting learning measured. 3) Transfer to the other response is then measured. 4) Finally, responses to the perceived distances of the targets are measured. There is large initial under-responding. Feedback rapidly makes responses almost accurate. Throw training transfers completely to walking. Walk training produces a small effect on throwing. Under instructions to respond to perceived distances, under-responding recurs. The phenomena are well described by a model in which the relation between target distance and response distance is determined by a sequence of a perceptual, a cognitive, and a motor transform. Walk learning is primarily motor; throw learning is cognitive.
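The closing sentences describe a three-stage transform chain; a schematic restatement (the function names are chosen here, not Foley's notation):

\[ r = f_{\mathrm{motor}}\!\left(f_{\mathrm{cognitive}}\!\left(f_{\mathrm{perceptual}}(d)\right)\right) \]

where d is target distance and r is response distance. On this reading, throw feedback updates the shared cognitive transform (hence full transfer to walking), whereas walk feedback mainly updates the walking-specific motor transform (hence little transfer to throwing).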
Affiliation(s)
- John M Foley, Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA, USA
5. Zhang J, Yang X, Jin Z, Li L. Distance Estimation in Virtual Reality Is Affected by Both the Virtual and the Real-World Environments. i-Perception 2021; 12:20416695211023956. [PMID: 34211686] [PMCID: PMC8216372] [DOI: 10.1177/20416695211023956]
Abstract
The experience of virtual reality (VR) is unique in that observers occupy a real-world location while browsing through a virtual scene. Previous studies have investigated the effect of the virtual environment on distance estimation, but it is unclear how the real-world environment influences distance estimation in VR. Here, we measured distance estimation using a bisection (Experiment 1) and a blind-walking (Experiments 2 and 3) method. Participants performed distance judgments in VR, which rendered either virtual indoor or outdoor scenes; the experiments were carried out in either real-world indoor or outdoor locations. In the bisection experiment, judged distance was greater in the virtual outdoor scene than in the virtual indoor scene, whereas the real-world environment had no impact on bisection judgments. In the blind-walking experiment, judged distance was greater in the real-world outdoor location than in the real-world indoor location, whereas the virtual environment had no impact on blind-walking judgments. Overall, our results suggest that both the virtual and the real-world environments affect distance judgment in VR. In particular, the real-world environment in which a person is physically located during a VR experience influences that person's distance estimation in VR.
Affiliation(s)
- Junjun Zhang, MOE Key Lab for Neuroinformation, The Clinical Hospital of Chengdu Brain Science Institute, University of Electronic Science and Technology of China, Chengdu, China
- Xiaoyan Yang, MOE Key Lab for Neuroinformation, The Clinical Hospital of Chengdu Brain Science Institute, University of Electronic Science and Technology of China, Chengdu, China
- Zhenlan Jin, MOE Key Lab for Neuroinformation, The Clinical Hospital of Chengdu Brain Science Institute, University of Electronic Science and Technology of China, Chengdu, China
- Ling Li, MOE Key Lab for Neuroinformation, The Clinical Hospital of Chengdu Brain Science Institute, University of Electronic Science and Technology of China, Chengdu, China
6. Using virtual reality to assess dynamic self-motion and landmark cues for spatial updating in children and adults. Mem Cognit 2020; 49:572-585. [PMID: 33108632] [DOI: 10.3758/s13421-020-01111-8]
Abstract
The relative contribution of different sources of information for spatial updating (keeping track of one's position in an environment) has been highly debated. Further, children and adults may differ in their reliance on visual versus body-based information for spatial updating. In two experiments, we tested children (age 10-12 years) and young adult participants on a virtual point-to-origin task that varied the types of self-motion information available for translation: full-dynamic (walking), visual-dynamic (controller induced), and no-dynamic (teleporting). In Experiment 1, participants completed the three conditions in an indoor virtual environment with visual landmark cues. Adults were more accurate in the full- and visual-dynamic conditions (which did not differ from each other) than in the no-dynamic condition. In contrast, children were most accurate in the visual-dynamic condition and least accurate in the no-dynamic condition. Adults outperformed children in all conditions. In Experiment 2, we removed the potential for relying on visual landmarks by running the same paradigm in an outdoor virtual environment with no geometrical room cues. As expected, adults' errors increased in all conditions, but performance was still relatively worse with teleporting. Surprisingly, children showed accuracy and patterns across locomotion conditions similar to those of adults. Together, the results support the importance of dynamic translation information (either visual or body-based) for spatial updating across both age groups, but suggest that children may be more reliant on visual information than adults.
7. The role of top-down knowledge about environmental context in egocentric distance judgments. Atten Percept Psychophys 2019; 80:586-599. [PMID: 29204865] [DOI: 10.3758/s13414-017-1461-z]
Abstract
Judgments of egocentric distances in well-lit natural environments can differ substantially in indoor versus outdoor contexts. Visual cues (e.g., linear perspective, texture gradients) no doubt play a strong role in context-dependent judgments when cues are abundant. Here we investigated a possible top-down influence on distance judgments that might play a unique role under conditions of perceptual uncertainty: assumptions or knowledge that one is indoors or outdoors. We presented targets in a large outdoor field and in an indoor classroom. To control visual distance and depth cues between the environments, we restricted the field of view by using a 14-deg aperture. Evidence of context effects depended on the response mode: Blindfolded-walking responses were systematically shorter indoors than outdoors, whereas verbal and size gesture judgments showed no context effects. These results suggest that top-down knowledge about the environmental context does not strongly influence visually perceived egocentric distance. However, this knowledge can operate as an output-level bias, such that blindfolded-walking responses are shorter when observers' top-down knowledge indicates that they are indoors and when the size of the room is uncertain.
8. Erkelens CJ. Multiple Photographs of a Perspective Scene Reveal the Principles of Picture Perception. Vision (Basel) 2018; 2(3):26. [PMID: 31735889] [PMCID: PMC6835796] [DOI: 10.3390/vision2030026]
Abstract
A picture is a powerful and convenient medium for inducing the illusion that one perceives a three-dimensional scene. The relative invariance of picture perception across viewing positions has aroused the interest of painters, photographers, and visual scientists. This study explores variables that may underlie that invariance. It presents a computational analysis of distances and directions in sets of two photographs of perspective scenes taken from different camera positions. Focal lengths of the lens and picture sizes are chosen such that one of the familiar objects is equally large in both photographs. The selected object is perceived at the same distance in both photographs, independent of viewing distance, showing that pictorial distance is fully determined by the angular size of the object. Pictorial distance is independent of camera position, focal length of the lens, and picture size. Distances and directions of pictorial objects are computed as a function of viewing distance and compared with distances and directions of the physical objects as a function of camera position. The computations show that the ratios between pictorial distances, directions, and angular sizes of objects in a photograph are constant as a function of viewing distance. These constant ratios are proposed as the reason for the invariance of picture perception over a range of viewing distances. Reanalysis of distance judgments obtained from the literature shows that perspective space, previously proposed as a model for visual space, is also a good model for pictorial space. The geometry of pictorial space contradicts some conceptions about picture perception.
Affiliation(s)
- Casper J Erkelens, Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS Utrecht, The Netherlands
9. McCann BC, Hayhoe MM, Geisler WS. Contributions of monocular and binocular cues to distance discrimination in natural scenes. J Vis 2018; 18:12. [PMID: 29710302] [PMCID: PMC5901372] [DOI: 10.1167/18.4.12]
Abstract
Little is known about distance discrimination in real scenes, especially at long distances. This is not surprising given the logistical difficulties of making such measurements. To circumvent these difficulties, we collected 81 stereo images of outdoor scenes, together with precisely registered range images that provided the ground-truth distance at each pixel location. We then presented the stereo images in the correct viewing geometry and measured the ability of human subjects to discriminate the distance between locations in the scene, as a function of absolute distance (3 m to 30 m) and the angular spacing between the locations being compared (2°, 5°, and 10°). Measurements were made for binocular and monocular viewing. Thresholds for binocular viewing were quite small at all distances (Weber fractions less than 1% at 2° spacing and less than 4% at 10° spacing). Thresholds for monocular viewing were higher than those for binocular viewing out to distances of 15-20 m, beyond which they were the same. Using standard cue-combination analysis, we also estimated what the thresholds would be based on binocular-stereo cues alone. With two exceptions, we show that the entire pattern of results is consistent with what one would expect from classical studies of binocular disparity thresholds and separation/size discrimination thresholds measured with simple laboratory stimuli. The first exception is some deviation from the expected pattern at close distances (especially for monocular viewing). The second exception is that thresholds in natural scenes are lower, presumably because of the rich figural cues contained in natural images.
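The "standard cue-combination analysis" mentioned here is usually the reliability-weighted (inverse-variance) rule; a sketch under that assumption, with thresholds expressed as variances:

\[ \frac{1}{\sigma_{\mathrm{binocular}}^{2}} = \frac{1}{\sigma_{\mathrm{mono}}^{2}} + \frac{1}{\sigma_{\mathrm{stereo}}^{2}} \quad\Rightarrow\quad \sigma_{\mathrm{stereo}}^{2} = \left(\frac{1}{\sigma_{\mathrm{binocular}}^{2}} - \frac{1}{\sigma_{\mathrm{mono}}^{2}}\right)^{-1} \]

which is how a stereo-alone threshold can be inferred from the measured binocular and monocular thresholds.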
Affiliation(s)
- Brian C McCann, Texas Advanced Computing Center, Center for Perceptual Systems and Department of Psychology, University of Texas at Austin, Austin, TX, USA
- Mary M Hayhoe, Center for Perceptual Systems and Department of Psychology, University of Texas at Austin, Austin, TX, USA
- Wilson S Geisler, Center for Perceptual Systems and Department of Psychology, University of Texas at Austin, Austin, TX, USA
10. Hecht H, Ramdohr M, von Castell C. Underestimation of large distances in active and passive locomotion. Exp Brain Res 2018; 236:1603-1609. [PMID: 29582108] [DOI: 10.1007/s00221-018-5245-z]
Abstract
Our ability to estimate distances, be it verbally or by locomotion, is exquisite at close range (action space). At distances above 100 m (vista space), verbal estimates continue to be quite accurate, whereas locomotor estimates have been found to be grossly underestimated. Until now, however, the latter have been measured on a treadmill, which might not translate to real-world walking. We investigated whether the motor underestimation found on the treadmill holds up in a natural environment. Observers viewed pictures of objects at distances between 10 and 245 m and were asked to reproduce these distances in a blindfolded walking task (using a passive movement or an active production method). Active and passive locomotor judgments underestimated far distances above 100 m. We conclude that underestimation of large distances does not depend on the medium (treadmill vs. real world) but rather on the sensory modality and the effort involved in the task.
Affiliation(s)
- Heiko Hecht, Psychologisches Institut, Johannes Gutenberg-Universität Mainz, Wallstraße 3, 55122 Mainz, Germany
- Max Ramdohr, Psychologisches Institut, Johannes Gutenberg-Universität Mainz, Wallstraße 3, 55122 Mainz, Germany
- Christoph von Castell, Psychologisches Institut, Johannes Gutenberg-Universität Mainz, Wallstraße 3, 55122 Mainz, Germany
11.
Abstract
In the literature, perspective space has been introduced as a model of visual space. Perspective space is grounded on the perspective nature of visual space during both binocular and monocular vision. A single parameter, that is, the distance of the vanishing point, transforms the geometry of physical space into that of perspective space. The perspective-space model predicts perceived angles, distances, and sizes. The model is compared with other models for distance and size perception. Perspective space predicts that perceived distance and size as a function of physical distance are described by hyperbolic functions. Alternatively, power functions have been widely used to describe perceived distance and size. Comparison of power and hyperbolic functions shows that both functions are equivalent within the range of distances that have been judged in experiments. Two models describing perceived distance on the ground plane appear to be equivalent with the perspective-space model too. The conclusion is that perspective space unifies a number of models of distance and size perception.
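As a sketch of the comparison being made (symbols chosen for this note; D is the model's single parameter, the distance of the vanishing point), a hyperbolic (saturating) form of the kind the abstract describes versus a power form for perceived distance d' as a function of physical distance d:

\[ d' = \frac{d}{1 + d/D} \qquad \text{versus} \qquad d' = a\,d^{\,b} \]

The hyperbolic form saturates at D as d grows, yet over the limited range of distances used in judgment experiments the two curves are nearly indistinguishable, which is the equivalence claimed above.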
Affiliation(s)
- Casper J. Erkelens, Experimental Psychology, Helmholtz Institute, Utrecht University, The Netherlands
12. Etchemendy PE, Abregú E, Calcagno ER, Eguia MC, Vechiatti N, Iasi F, Vergara RO. Auditory environmental context affects visual distance perception. Sci Rep 2017; 7:7189. [PMID: 28775372] [PMCID: PMC5543138] [DOI: 10.1038/s41598-017-06495-3]
Abstract
In this article, we show that visual distance perception (VDP) is influenced by the auditory environmental context through reverberation-related cues. We performed two VDP experiments in two dark rooms with extremely different reverberation times: an anechoic chamber and a reverberant room. Subjects assigned to the reverberant room perceived the targets as farther away than subjects assigned to the anechoic chamber. We also found a positive correlation between the maximum perceived distance and the auditorily perceived room size. In a second experiment, the subjects of Experiment 1 were interchanged between rooms. Subjects preserved their responses from the previous experiment provided those responses were compatible with their present perception of the environment; if not, perceived distance was biased toward the auditorily perceived boundaries of the room. The results of both experiments show that the auditory environment can influence VDP, presumably through reverberation cues related to the perception of room size.
Affiliation(s)
- Pablo E Etchemendy, Laboratorio de Acústica y Percepción Sonora, Escuela Universitaria de Artes, CONICET, Universidad Nacional de Quilmes, B1876BXD, Bernal, Buenos Aires, Argentina
- Ezequiel Abregú, Laboratorio de Acústica y Percepción Sonora, Escuela Universitaria de Artes, CONICET, Universidad Nacional de Quilmes, B1876BXD, Bernal, Buenos Aires, Argentina
- Esteban R Calcagno, Laboratorio de Acústica y Percepción Sonora, Escuela Universitaria de Artes, CONICET, Universidad Nacional de Quilmes, B1876BXD, Bernal, Buenos Aires, Argentina
- Manuel C Eguia, Laboratorio de Acústica y Percepción Sonora, Escuela Universitaria de Artes, CONICET, Universidad Nacional de Quilmes, B1876BXD, Bernal, Buenos Aires, Argentina
- Nilda Vechiatti, Laboratorio de Acústica y Luminotecnia, Comisión de Investigaciones Científicas de la Provincia de Buenos Aires, Cno. Centenario e/505 y 508, M. B. Gonnet, Buenos Aires, Argentina
- Federico Iasi, Laboratorio de Acústica y Luminotecnia, Comisión de Investigaciones Científicas de la Provincia de Buenos Aires, Cno. Centenario e/505 y 508, M. B. Gonnet, Buenos Aires, Argentina
- Ramiro O Vergara, Laboratorio de Acústica y Percepción Sonora, Escuela Universitaria de Artes, CONICET, Universidad Nacional de Quilmes, B1876BXD, Bernal, Buenos Aires, Argentina
13. Philbeck JW, Witt JK. Action-specific influences on perception and postperceptual processes: Present controversies and future directions. Psychol Bull 2015; 141:1120-1144. [PMID: 26501227] [PMCID: PMC4621785] [DOI: 10.1037/a0039738]
Abstract
The action-specific perception account holds that people perceive the environment in terms of their ability to act in it. In this view, for example, decreased ability to climb a hill because of fatigue makes the hill visually appear to be steeper. Though influential, this account has not been universally accepted, and in fact a heated controversy has emerged. The opposing view holds that action capability has little or no influence on perception. Heretofore, the debate has been quite polarized, with efforts largely being focused on supporting one view and dismantling the other. We argue here that polarized debate can impede scientific progress and that the search for similarities between 2 sides of a debate can sharpen the theoretical focus of both sides and illuminate important avenues for future research. In this article, we present a synthetic review of this debate, drawing from the literatures of both approaches, to clarify both the surprising similarities and the core differences between them. We critically evaluate existing evidence, discuss possible mechanisms of action-specific effects, and make recommendations for future research. A primary focus of future work will involve not only the development of methods that guard against action-specific postperceptual effects but also development of concrete, well-constrained underlying mechanisms. The criteria for what constitutes acceptable control of postperceptual effects and what constitutes an appropriately specific mechanism vary between approaches, and bridging this gap is a central challenge for future research.
14. Harris LR, Carnevale MJ, D'Amour S, Fraser LE, Harrar V, Hoover AEN, Mander C, Pritchett LM. How our body influences our perception of the world. Front Psychol 2015; 6:819. [PMID: 26124739] [PMCID: PMC4464078] [DOI: 10.3389/fpsyg.2015.00819]
Abstract
Incorporating the fact that the senses are embodied is necessary for an organism to interpret sensory information. Before a unified perception of the world can be formed, sensory signals must be processed with reference to a body representation. The various attributes of the body, such as shape, proportion, posture, and movement, can be derived from the various sensory systems and can in turn affect perception of the world (including of the body itself). In this review we examine the relationships between sensory and motor information, body representations, and perceptions of the world and the body. We provide several examples of how the body affects perception (including but not limited to body perception). First, we show that body orientation affects visual distance perception and object orientation. Also, visual-auditory crossmodal correspondences depend on the orientation of the body: audio "high" frequencies correspond to a visual "up" defined by both gravity and body coordinates. Next, we show that the perceived location of touch is affected by the orientation of the head and eyes on the body, suggesting a visual component to coding body locations. Additionally, the reference frame used for coding touch locations seems to depend on whether gaze is static or moved relative to the body during the tactile task. Perceived attributes of the body, such as body size, affect tactile perception even at the level of detection thresholds and two-point discrimination. Next, long-range tactile masking provides clues to the posture of the body in a canonical body schema. Finally, ownership of seen body parts depends on the orientation and perspective of the body part in view. Together, these findings demonstrate how sensory and motor information, body representations, and perceptions (of the body and the world) are interdependent.
Affiliation(s)
- Laurence R. Harris, Multisensory Integration Laboratory, The Centre for Vision Research, York University, Toronto, ON, Canada; Department of Psychology, York University, Toronto, ON, Canada
- Michael J. Carnevale, Multisensory Integration Laboratory, The Centre for Vision Research, York University, Toronto, ON, Canada; Department of Psychology, York University, Toronto, ON, Canada
- Sarah D’Amour, Multisensory Integration Laboratory, The Centre for Vision Research, York University, Toronto, ON, Canada; Department of Psychology, York University, Toronto, ON, Canada
- Lindsey E. Fraser, Multisensory Integration Laboratory, The Centre for Vision Research, York University, Toronto, ON, Canada; Department of Psychology, York University, Toronto, ON, Canada
- Vanessa Harrar, School of Optometry, University of Montreal, Montreal, QC, Canada
- Adria E. N. Hoover, Multisensory Integration Laboratory, The Centre for Vision Research, York University, Toronto, ON, Canada; Department of Psychology, York University, Toronto, ON, Canada
- Charles Mander, Multisensory Integration Laboratory, The Centre for Vision Research, York University, Toronto, ON, Canada; Department of Psychology, York University, Toronto, ON, Canada
- Lisa M. Pritchett, Multisensory Integration Laboratory, The Centre for Vision Research, York University, Toronto, ON, Canada; Department of Psychology, York University, Toronto, ON, Canada
15. Direct manipulation of perceived angular declination affects perceived size and distance: a replication and extension of Wallach and O'Leary (1982). Atten Percept Psychophys 2015; 77:1371-1378. [PMID: 25791469] [PMCID: PMC4415979] [DOI: 10.3758/s13414-015-0864-y]
Abstract
In two experiments involving a total of 83 participants, the effect of vertical angular optical compression on the perceived distance and size of a target on the ground was investigated. Replicating an earlier report (Wallach & O’Leary, 1982), reducing the apparent angular declination below the horizon produced apparent object width increases (by 33%), consistent with the perception of a greater ground distance to the object. A throwing task confirmed that perceived distance was indeed altered by about 33%. The results are discussed in relation to cue recruitment and to recent evidence of systematic bias in the perception of angular declination.
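A back-of-the-envelope illustration of how these numbers cohere, assuming the standard ground-plane relation d = h/tan(delta) and an eye height of h = 1.6 m (the eye height and target distance are illustrative, not values from the study):

\[ \delta = \arctan\frac{1.6}{4} \approx 21.8^{\circ} \quad\longrightarrow\quad \delta' = \arctan\frac{1.6}{5.32} \approx 16.7^{\circ} \]

Compressing the declination from 21.8° to 16.7° specifies a ground distance of about 5.32 m instead of 4 m (+33%); with angular size fixed, perceived width scales in proportion to perceived distance, matching the 33% width increase reported above.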
16. Gajewski DA, Wallin CP, Philbeck JW. The Effects of Age and Set Size on the Fast Extraction of Egocentric Distance. Vis Cogn 2015; 23:957-988. [PMID: 27398065] [DOI: 10.1080/13506285.2015.1132803]
Abstract
Angular direction is a source of information about the distance to floor-level objects that can be extracted from brief glimpses (near one's threshold for detection). Age and set size are two factors known to impact the viewing time needed to directionally localize an object, and these were posited to similarly govern the extraction of distance. The question here was whether viewing durations sufficient to support object detection (controlled for age and set size) would also be sufficient to support well-constrained judgments of distance. Regardless of viewing duration, distance judgments were more accurate (less biased towards underestimation) when multiple potential targets were presented, suggesting that the relative angular declinations between the objects are an additional source of useful information. Distance judgments were more precise with additional viewing time, but the benefit did not depend on set size and accuracy did not improve with longer viewing durations. The overall pattern suggests that distance can be efficiently derived from direction for floor-level objects. Controlling for age-related differences in the viewing time needed to support detection was sufficient to support distal localization but only when brief and longer glimpse trials were interspersed. Information extracted from longer glimpse trials presumably supported performance on subsequent trials when viewing time was more limited. This outcome suggests a particularly important role for prior visual experience in distance judgments for older observers.
Affiliation(s)
- Daniel A Gajewski, Department of Psychology, The George Washington University, Washington, DC
- Courtney P Wallin, Department of Psychology, The George Washington University, Washington, DC
- John W Philbeck, Department of Psychology, The George Washington University, Washington, DC
17. Röhrich WG, Hardiess G, Mallot HA. View-based organization and interplay of spatial working and long-term memories. PLoS One 2014; 9:e112793. [PMID: 25409437] [PMCID: PMC4237361] [DOI: 10.1371/journal.pone.0112793]
Abstract
Space perception provides egocentric, oriented views of the environment from which working and long-term memories are constructed. “Allocentric” (i.e. position-independent) long-term memories may be organized as graphs of recognized places or views but the interaction of such cognitive graphs with egocentric working memories is unclear. Here we present a simple coherent model of view-based working and long-term memories, together with supporting evidence from behavioral experiments. The model predicts that within a given place, memories for some views may be more salient than others, that imagery of a target square should depend on the location where the recall takes place, and that recall favors views of the target square that would be obtained when approaching it from the current recall location. In two separate experiments in an outdoor urban environment, pedestrians were approached at various interview locations and asked to draw sketch maps of one of two well-known squares. Orientations of the sketch map productions depended significantly on distance and direction of the interview location from the target square, i.e. different views were recalled at different locations. Further analysis showed that location-dependent recall is related to the respective approach direction when imagining a walk from the interview location to the target square. The results are consistent with a view-based model of spatial long-term and working memories and their interplay.
Affiliation(s)
- Wolfgang G Röhrich, Cognitive Neuroscience Unit, Department of Biology, University of Tübingen, Tübingen, Germany
- Gregor Hardiess, Cognitive Neuroscience Unit, Department of Biology, University of Tübingen, Tübingen, Germany
- Hanspeter A Mallot, Cognitive Neuroscience Unit, Department of Biology, University of Tübingen, Tübingen, Germany
18. Nawrot M, Ratzlaff M, Leonard Z, Stroyan K. Modeling depth from motion parallax with the motion/pursuit ratio. Front Psychol 2014; 5:1103. [PMID: 25339926] [PMCID: PMC4186274] [DOI: 10.3389/fpsyg.2014.01103]
Abstract
The perception of unambiguous scaled depth from motion parallax relies on both retinal image motion and an extra-retinal pursuit eye movement signal. The motion/pursuit ratio represents a dynamic geometric model linking these two proximal cues to the ratio of depth to viewing distance. An important step in understanding the visual mechanisms serving the perception of depth from motion parallax is to determine the relationship between these stimulus parameters and empirically determined perceived depth magnitude. Observers compared perceived depth magnitude of dynamic motion parallax stimuli to static binocular disparity comparison stimuli at three different viewing distances, in both head-moving and head-stationary conditions. A stereo-viewing system provided ocular separation for stereo stimuli and monocular viewing of parallax stimuli. For each motion parallax stimulus, a point of subjective equality (PSE) was estimated for the amount of binocular disparity that generates the equivalent magnitude of perceived depth from motion parallax. Similar to previous results, perceived depth from motion parallax had significant foreshortening. Head-moving conditions produced even greater foreshortening due to the differences in the compensatory eye movement signal. An empirical version of the motion/pursuit law, termed the empirical motion/pursuit ratio, which models perceived depth magnitude from these stimulus parameters, is proposed.
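The motion/pursuit ratio referred to here has a compact form; a sketch with symbols chosen for this note (dθ/dt, retinal motion of a point relative to fixation; dα/dt, the pursuit rate; d, the point's depth; f, the fixation distance):

\[ \frac{d}{f} \;\approx\; \frac{d\theta/dt}{d\alpha/dt} \]

so relative depth is recovered, to a small-angle approximation, by dividing retinal image motion by the concurrent pursuit signal. The empirical motion/pursuit ratio proposed above keeps this form but fits it to perceived, rather than geometric, depth magnitudes.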
Affiliation(s)
- Mark Nawrot, Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, USA
- Michael Ratzlaff, Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, USA
- Zachary Leonard, Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, USA
- Keith Stroyan, Math Department, University of Iowa, Iowa City, IA, USA
19. Catching ease influences perceived speed: evidence for action-specific effects from action-based measures. Psychon Bull Rev 2014; 20:1364-1370. [PMID: 23658059] [DOI: 10.3758/s13423-013-0448-6]
Abstract
According to the action-specific account of perception, people perceive the environment in terms of their ability to act in it. Here, we directly tested this claim by using an action-based measure of perceived speed: participants attempted to catch a virtual fish by releasing a virtual net. The net varied in size, making the task easier or harder. We measured perceived speed using explicit judgment-based measures and an action-based measure (time to release the net). Participants released the net later when playing with the big net as compared with the small net, indicating that the fish looked to be moving more slowly when participants played with the big net. Explicit judgments of fish speed were similarly influenced by net size. These results provide converging evidence from both explicit and action-based measures that a perceiver's ability to act influences a common underlying process, most likely perceived speed, rather than postperceptual processes such as response formation.
20. Gajewski DA, Philbeck JW, Wirtz PW, Chichka D. Angular declination and the dynamic perception of egocentric distance. J Exp Psychol Hum Percept Perform 2014; 40:361-377. [PMID: 24099588] [PMCID: PMC4140626] [DOI: 10.1037/a0034394]
Abstract
The extraction of the distance between an object and an observer is fast when angular declination is informative, as it is with targets placed on the ground. To what extent does angular declination drive performance when viewing time is limited? Participants judged target distances in a real-world environment with viewing durations ranging from 36 to 220 ms. An important role for angular declination was supported by experiments showing that the cue provides information about egocentric distance even on the very first glimpse, and that it supports a sensitive response to distance in the absence of other useful cues. Performance was better at 220-ms viewing durations than for briefer glimpses, suggesting that the perception of distance is dynamic even within the time frame of a typical eye fixation. Critically, performance on limited-viewing trials was better when preceded by a 15-s preview of the room without a designated target. The results indicate that the perception of distance is powerfully shaped by memory from prior visual experience with the scene. A theoretical framework for the dynamic perception of distance is presented.
Affiliation(s)
- Philip W. Wirtz, Department of Psychology, The George Washington University; Department of Decision Sciences, The George Washington University
- David Chichka, Department of Mechanical and Aerospace Engineering, The George Washington University
21. Gajewski DA, Wallin CP, Philbeck JW. Gaze behavior and the perception of egocentric distance. J Vis 2014; 14:20. [PMID: 24453346] [PMCID: PMC3900371] [DOI: 10.1167/14.1.20]
Abstract
The ground plane is thought to be an important reference for localizing objects, particularly when angular declination is informative, as it is for objects seen resting at floor level. A potential role for eye movements has been implicated by the idea that information about the nearby ground is required to localize more distant objects, and by the fact that the time course for the extraction of distance extends beyond the duration of a typical eye fixation. To test this potential role, eye movements were monitored while participants previewed targets. Distance estimates were provided by walking without vision to the remembered target location (blind walking) or by verbal report. We found that a strategy of holding the gaze steady on the object was as frequent as one where the region between the observer and the object was fixated. There was no performance advantage associated with making eye movements in an observational study (Experiment 1) or when an eye-movement strategy was manipulated experimentally (Experiment 2). Observers were extracting useful information covertly, however. In Experiments 3 through 5, obscuring the nearby ground plane had a modest impact on performance; obscuring the walls and ceiling was more detrimental. The results suggest that these alternate surfaces provide useful information when judging the distance to objects within indoor environments. Critically, they constrain the role for the nearby ground plane in theories of egocentric distance perception.
Affiliation(s)
- Daniel A. Gajewski, Department of Psychology, George Washington University, Washington, DC, USA
- Courtney P. Wallin, Department of Psychology, George Washington University, Washington, DC, USA
- John W. Philbeck, Department of Psychology, George Washington University, Washington, DC, USA
22. Herdtweck C, Wallraven C. Estimation of the horizon in photographed outdoor scenes by human and machine. PLoS One 2013; 8:e81462. [PMID: 24349073] [PMCID: PMC3861256] [DOI: 10.1371/journal.pone.0081462]
Abstract
We present three experiments on horizon estimation. In Experiment 1 we verify the human ability to estimate the horizon in static images from visual input alone. Estimates are given without time constraints, with emphasis on precision. The resulting estimates are used as a baseline to evaluate horizon estimates from early visual processes in Experiment 2, in which stimuli are presented for only 153 ms and then masked to purge visual short-term memory, forcing estimates to rely on early processes only. The high agreement between estimates and the lack of a training effect show that enough information about viewpoint is extracted in the first few hundred milliseconds to make accurate horizon estimation possible. In Experiment 3 we investigate several strategies for estimating the horizon by machine and compare human with machine "behavior" for different image manipulations and image scene types.
Affiliation(s)
- Christian Herdtweck, Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Christian Wallraven, Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
23. Li Z, Durgin FH. Depth compression based on mis-scaling of binocular disparity may contribute to angular expansion in perceived optical slant. J Vis 2013; 13(12):3. [PMID: 24097046] [DOI: 10.1167/13.12.3]
Abstract
Three studies, involving a total of 145 observers, examined quantitative theories of the overestimation of perceived optical slant. The first two studies investigated depth/width anisotropies for positive and negative slants, in both pitch and yaw, at 2 and 8 m, using calibrated immersive virtual environments. Observers judged the relative lengths of frontal extents and extents in depth, and the physical aspect ratio perceived as 1:1 was determined for each slant. The observed anisotropies can be modeled by assuming overestimation of perceived slant. Three one-parameter slant-perception models (angular expansion, affine depth compression caused by mis-scaling of binocular disparity, and intrinsic bias) were compared. The angular expansion and affine depth compression models provided significantly better fits to the aspect-ratio data than the intrinsic bias model did. However, the affine model required much more depth compression at the 2 m distance than the depth compression measured directly in the third study using the same apparatus. The present results suggest that depth compression based on mis-scaling of binocular disparity may contribute to slant overestimation, especially as viewing distance increases, but also suggest that a functional rather than mechanistic account may be more appropriate for explaining biases in perceived slant in near space.
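To see why mis-scaled disparity produces an affine compression of depth relative to width, recall the standard small-angle relation between relative disparity and depth (symbols chosen here: η, relative disparity; D, viewing distance; I, interocular separation):

\[ \Delta d \;\approx\; \frac{\eta\,D^{2}}{I} \]

If disparities are scaled with an underestimated distance D' < D, recovered depth shrinks by roughly (D'/D)^2 while frontal extents shrink only by D'/D, so depth is compressed relative to width by the residual factor D'/D, which is the affine compression the model invokes.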
Affiliation(s)
- Zhi Li, Psychology Department, Swarthmore College, Swarthmore, PA, USA
24. Linkenauger SA, Leyrer M, Bülthoff HH, Mohler BJ. Welcome to wonderland: the influence of the size and shape of a virtual hand on the perceived size and shape of virtual objects. PLoS One 2013; 8:e68594. [PMID: 23874681] [PMCID: PMC3708948] [DOI: 10.1371/journal.pone.0068594]
Abstract
The notion of body-based scaling suggests that our body and its action capabilities are used to scale the spatial layout of the environment. Here we present four studies supporting this perspective by showing that the hand acts as a metric that individuals use to scale the apparent sizes of objects in the environment. Testing this requires manipulating the size and/or dimensions of the perceiver's hand, which is difficult in the real world because hand dimensions cannot readily be altered. To overcome this limitation, we used virtual reality to manipulate the dimensions of participants' fully tracked virtual hands and investigated the influence on the perceived size and shape of virtual objects. In a series of experiments, using several measures, we show that individuals' estimates of the sizes of virtual objects differ depending on the size of their virtual hand, in the direction consistent with the body-based scaling hypothesis. Additionally, we found that these effects were specific to participants' own virtual hands rather than another avatar's hands or a salient familiar-sized object. While these studies provide support for a body-based approach to the scaling of spatial layout, they also demonstrate the influence of virtual bodies on the perception of virtual environments.
25. Wu J, He ZJ, Ooi TL. The visual system's intrinsic bias influences space perception in the impoverished environment. J Exp Psychol Hum Percept Perform 2013; 40:626-638. [PMID: 23750965] [DOI: 10.1037/a0033034]
Abstract
A dimly lit target at an intermediate distance in the dark is judged to lie at the intersection between the target's projection line from the eye to its physical location and an implicit slanted surface, the visual system's intrinsic bias. We hypothesize that the intrinsic bias also contributes to perceptual space in impoverished environments. We first showed that a target viewed against sparse texture elements delineating the horizontal ground surface in the dark is localized along an implicit surface that is less slanted than the intrinsic bias, reflecting a weighted integration of the weak texture information and the intrinsic bias. We also showed that while judged egocentric locations are similar across exposure durations of 0.15 to 5 s, judgment precision improves with duration. Furthermore, the precision of judged target angular declination does not vary with physical angular declination and is better than the precision of the judged eye-to-target distance. Second, we used both action and perceptual tasks to directly reveal the perceived surface slant. Confirming our hypothesis, we found that an L-shaped target on the horizontal ground with sparse texture information is perceived with a slant that is less than that of the intrinsic bias.
Affiliation(s)
- Jun Wu, Department of Psychological and Brain Sciences, University of Louisville
- Zijiang J He, Department of Psychological and Brain Sciences, University of Louisville
- Teng Leng Ooi, Department of Basic Sciences, Pennsylvania College of Optometry, Salus University
26. Perception of 3-D location based on vision, touch, and extended touch. Exp Brain Res 2012; 224:141-153. [PMID: 23070234] [DOI: 10.1007/s00221-012-3295-1]
Abstract
Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using extended touch (i.e., using a probe to extend the reach of the arm) and (2) the accuracy with which such targets are perceived. In Experiment 1, participants perceived the 3-D locations of individual targets from a fixed origin and were then tested with an updating task involving blindfolded walking followed by placement of the hand at the remembered target location. Twenty-four target locations, representing all combinations of two distances, two heights, and six azimuths, were perceived by vision or by blindfolded exploration with the bare hand, a 1-m probe, or a 2-m probe. Systematic errors in azimuth were observed for all targets, reflecting errors in representing the target locations and updating. Overall, updating after visual perception was best, but the quantitative differences between conditions were small. Experiment 2 demonstrated that auditory information signifying contact with the target was not a factor. Overall, the results indicate that 3-D spatial images can be formed of targets sensed by extended touch and that perception by extended touch, even out to 1.75 m, is surprisingly accurate.
27.
Abstract
The loss of peripheral vision impairs spatial learning and navigation. However, the mechanisms underlying these impairments remain poorly understood. One advantage of having peripheral vision is that objects in an environment are easily detected and readily foveated via eye movements. The present study examined this potential benefit of peripheral vision by investigating whether competent performance in spatial learning requires effective eye movements. In Experiment 1, participants learned room-sized spatial layouts with or without restriction on direct eye movements to objects. Eye movements were restricted by having participants view the objects through small apertures in front of their eyes. Results showed that impeding effective eye movements made subsequent retrieval of spatial memory slower and less accurate. The small apertures also occluded much of the environmental surroundings, but the importance of this kind of occlusion was ruled out in Experiment 2 by showing that participants exhibited intact learning of the same spatial layouts when luminescent objects were viewed in an otherwise dark room. Together, these findings suggest that one of the roles of peripheral vision in spatial learning is to guide eye movements, highlighting the importance of spatial information derived from eye movements for learning environmental layouts.
28. Arthur JC, Kortte KB, Shelhamer M, Schubert MC. Linear path integration deficits in patients with abnormal vestibular afference. Seeing Perceiving 2012; 25:155-178. [PMID: 22726251] [DOI: 10.1163/187847612x629928]
Abstract
Effective navigation requires the ability to keep track of one's location and maintain orientation during linear and angular displacements. Path integration is the process of updating the representation of body position by integrating internally generated self-motion signals over time (e.g., when walking in the dark). One major source of input to path integration is vestibular afference. We tested patients with reduced vestibular function (unilateral vestibular hypofunction, UVH), patients with aberrant vestibular function (benign paroxysmal positional vertigo, BPPV), and healthy participants (controls) on two linear path integration tasks: experimenter-guided walking and target-directed walking. The experimenter-guided walking task revealed a systematic underestimation of self-motion signals in UVH patients compared to the other groups. However, we did not find any difference in the distance walked between the UVH group and the control group in the target-directed walking task. Results from neuropsychological testing and clinical balance measures suggest that the errors in experimenter-guided walking were not attributable to cognitive and/or balance impairments. We conclude that the impairment in linear path integration in UVH patients stems from deficits in self-motion perception. Importantly, our results also suggest that patients with a UVH deficit do not lose their ability to walk accurately without vision to a memorized target location.
Affiliation(s)
- Joeanna C Arthur
- Department of Otolaryngology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
29
Rand KM, Tarampi MR, Creem-Regehr SH, Thompson WB. The influence of ground contact and visible horizon on perception of distance and size under severely degraded vision. Seeing Perceiving 2012; 25:425-47. [PMID: 22370655 DOI: 10.1163/187847611x620946] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
In low vision navigation, misperceiving the locations of hazards can have serious consequences. One potential source of such misperceptions is hazards that are not visually associated with the ground plane, thus depriving the viewer of important perspective cues to egocentric distance. In Experiment 1, we assessed absolute distance and size judgments to targets on stands under degraded vision conditions. Normally sighted observers wore blur goggles that severely reduced acuity and contrast, and viewed targets placed on either detectable or undetectable stands. Participants in the detectable stand condition demonstrated accurate distance judgments, whereas participants in the undetectable stand condition overestimated target distances. Similarly, targets in the undetectable stand condition were judged to be significantly larger than those in the detectable stand condition, suggesting a perceptual coupling of size and distance under degraded vision. In Experiment 2, we investigated size and implied distance perception of targets elevated above a visible horizon for individuals in an induced state of degraded vision. When participants' size judgments were inserted into the size-distance invariance hypothesis (SDIH) formula, implied distance to above-horizon objects increased compared to that for objects below the horizon. Together, our results emphasize the importance of salient visible ground-contact information for accurate distance perception. The absence of this ground-contact information could be a source of perceptual errors leading to potential hazards for low vision individuals with severely degraded acuity and contrast sensitivity.
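For reference, the SDIH formula invoked here is standard: for a target subtending visual angle θ, perceived size Ŝ and perceived distance D̂ are coupled by

    \hat{S} = 2 \hat{D} \tan(\theta / 2) ,

so a size judgment can be inverted to give the implied distance, D̂ = Ŝ / (2 tan(θ/2)). Larger size judgments for above-horizon targets therefore translate directly into larger implied distances.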
Affiliation(s)
- Kristina M Rand
- University of Utah, 380 S. 1530 E., Room 502, Salt Lake City, UT 84112, USA.
30
Li Z, Phillips J, Durgin FH. The underestimation of egocentric distance: evidence from frontal matching tasks. Atten Percept Psychophys 2011; 73:2205-17. [PMID: 21735313 PMCID: PMC3205207 DOI: 10.3758/s13414-011-0170-2] [Citation(s) in RCA: 60] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
There is controversy over the existence, nature, and cause of error in egocentric distance judgments. One proposal is that the systematic biases often found in explicit judgments of egocentric distance along the ground may be related to recently observed biases in the perceived declination of gaze (Durgin & Li, Attention, Perception, & Psychophysics, in press). To measure perceived egocentric distance nonverbally, observers in a field were asked to position themselves so that their distance from one of two experimenters was equal to the frontal distance between the experimenters. Observers placed themselves too far away, consistent with egocentric distance underestimation. A similar experiment was conducted with vertical frontal extents. Both experiments were replicated in panoramic virtual reality. Perceived egocentric distance was quantitatively consistent with the angular bias in perceived gaze declination (a gain of 1.5). Finally, an exocentric distance-matching task was contrasted with a variant of the egocentric matching task. The egocentric matching data approximate a constant compression of perceived egocentric distance, with a power-function exponent of nearly 1; exocentric matches had an exponent of about 0.67. The divergent pattern between egocentric and exocentric matches suggests that they depend on different visual cues.
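The quantitative link drawn here can be written out with standard eye-height geometry (a sketch of the logic, not the authors' exact equations): for eye height h, a target on the ground at distance d lies at gaze declination γ = arctan(h/d). If perceived declination is expanded with a gain of 1.5, the distance implied by the perceived ground intercept is

    \hat{d} = \frac{h}{\tan(1.5 \arctan(h/d))} .

At distances where the angles are small, the tangent is roughly linear, so d̂ ≈ d/1.5, an approximately constant compression, consistent with the egocentric power-function exponent of nearly 1 reported here.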
Affiliation(s)
- Zhi Li
- Swarthmore College, Department of Psychology, 500 College Ave, Swarthmore, PA 19081, USA
- John Phillips
- Swarthmore College, Department of Psychology, 500 College Ave, Swarthmore, PA 19081, USA
- Frank H. Durgin
- Swarthmore College, Department of Psychology, 500 College Ave, Swarthmore, PA 19081, USA
31
Rand KM, Tarampi MR, Creem-Regehr SH, Thompson WB. The importance of a visual horizon for distance judgments under severely degraded vision. Perception 2011; 40:143-54. [PMID: 21650089 DOI: 10.1068/p6843] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
In two experiments we examined the role of visual horizon information in absolute egocentric distance judgments to on-ground targets. Sedgwick [1983, in Human and Machine Vision (New York: Academic Press), pp 425-458] suggested that the visual system may utilize the angle of declination from a horizontal line of sight to the target location (the horizon-distance relation) to determine absolute distances on infinite ground surfaces. While studies have supported this hypothesis, less is known about the specific cues (vestibular, visual) used to determine the horizontal line of sight. We investigated this question by requiring observers to judge distances under degraded vision given an unaltered or raised visual horizon. The results suggest that visual horizon information does influence perception of absolute distance, as evident through two different action-based measures: walking or throwing without vision to previously viewed targets. Distances were judged as shorter in the presence of a raised visual horizon. The results are discussed with respect to how the visual system determines absolute distance to objects on a finite ground plane, and with respect to their implications for understanding space perception in low-vision individuals.
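The horizon-distance relation at issue can be stated compactly (notation ours): for eye height h and a target whose ground-contact point lies at angle of declination α below the horizontal line of sight,

    d = \frac{h}{\tan(\alpha)} .

A raised visual horizon increases the apparent declination of a fixed target by some amount δ, so the implied distance h / tan(α + δ) is shorter than d, which matches the direction of the effect reported here.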
Affiliation(s)
- Kristina M Rand
- Department of Psychology, University of Utah, Salt Lake City, UT 84112, USA.
32
Spatial updating according to a fixed reference direction of a briefly viewed layout. Cognition 2011; 119:419-29. [PMID: 21439561 DOI: 10.1016/j.cognition.2011.02.006] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2009] [Revised: 12/22/2010] [Accepted: 02/04/2011] [Indexed: 11/20/2022]
Abstract
Three experiments examined the role of reference directions in spatial updating. Participants briefly viewed an array of five objects. A non-egocentric reference direction was primed by placing a stick under two objects in the array at the time of learning. After a short interval, participants detected which object had been moved from a novel view that was caused by table rotation or by their own locomotion. The stick was removed at test. The results showed that detection of position change was better when an object not on the stick was moved than when an object on the stick was moved. Furthermore, change detection was better in the observer locomotion condition than in the table rotation condition only when an object on the stick was moved, not when an object not on the stick was moved. These results indicate that when the reference direction was not accurately indicated in the test scene, detection of position change was impaired, but this impairment was smaller in the observer locomotion condition. People thus appear not only to represent objects' locations with respect to a fixed reference direction but also to represent and update their own orientation according to the same reference direction; the updated orientation can be used to recover the reference direction and facilitate detection of position change when no accurate reference direction is present in the test scene.
33
Gajewski DA, Philbeck JW, Pothier S, Chichka D. From the most fleeting of glimpses: on the time course for the extraction of distance information. Psychol Sci 2010; 21:1446-53. [PMID: 20732904 DOI: 10.1177/0956797610381508] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
An observer's visual perception of the absolute distance between his or her position and an object is based on multiple sources of information that must be extracted during scene viewing. Research has not yet discovered the viewing duration observers need to fully extract distance information, particularly in navigable real-world environments. In a visually directed walking task, participants showed a sensitive response to distance when they were given 9-ms glimpses of floor- and eye-level targets. However, sensitivity to distance decreased markedly when targets were presented at eye level and angular size was rendered uninformative. Performance after brief viewing durations was characterized by underestimation of distance, unless the brief-viewing trials were preceded by a block of extended-viewing trials. The results indicate that experience plays a role in the extraction of information during brief glimpses. Even without prior experience, the extraction of useful information is virtually immediate when the cues of angular size or angular declination are informative for the observer.
Affiliation(s)
- Daniel A Gajewski
- Department of Psychology, George Washington University, Washington, DC 20052, USA.
34
Pagano CC, Grutzmacher RP, Jenkins JC. Comparing verbal and reaching responses to visually perceived egocentric distances. Ecological Psychology 2001. [DOI: 10.1207/s15326969eco1303_2] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
35
A comparison of blindpulling and blindwalking as measures of perceived absolute distance. Behav Res Methods 2010; 42:148-60. [PMID: 20160295 DOI: 10.3758/brm.42.1.148] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Blindwalking has become a common measure of perceived absolute distance and location, but it requires a relatively large testing space and cannot be used with people for whom walking is difficult or impossible. In the present article, we describe an alternative response type that is closely matched to blindwalking in several important respects but is less resource intensive. In the blindpulling technique, participants view a target, then close their eyes and pull a length of tape or rope between the hands to indicate the remembered target distance. As with blindwalking, this response requires integration of cyclical, bilateral limb movements over time. Blindpulling and blindwalking responses are tightly linked across a range of viewing conditions, and blindpulling is accurate when prior exposure to visually guided pulling is provided. Thus, blindpulling shows promise as a measure of perceived distance that may be used with nonambulatory populations and when the space available for testing is limited.
36
Abstract
In a series of experiments, it was found that emotional arousal can influence height perception. In Experiment 1, participants viewed either arousing or nonarousing images before estimating the height of a 2-story balcony and the size of a target on the ground below the balcony. People who viewed arousing images overestimated height and target size more than did those who viewed nonarousing images. However, in Experiment 2, estimates of horizontal distances were not influenced by emotional arousal. In Experiment 3, both valence and arousal cues were manipulated, and it was found that arousal, but not valence, moderated height perception. In Experiment 4, participants either up-regulated or down-regulated their emotional experience while viewing emotionally arousing images, and a control group simply viewed the arousing images. Those participants who up-regulated their emotional experience overestimated height more than did the control or down-regulated participants. In sum, emotional arousal influences estimates of height, and this influence can be moderated by emotion regulation strategies.
37
Abstract
Blind walking has become a common measure of perceived target location. This article addresses the possibility that blind walking might vary systematically within an experimental session as participants accrue exposure to nonvisual locomotion. Such variations could complicate the interpretation of blind walking as a measure of perceived location. We measured walked distance, velocity, and pace length in indoor and outdoor environments (1.5-16.0 m target distances). Walked distance increased over 37 trials by approximately 9.33% of the target distance; velocity (and to a lesser extent, pace length) also increased, primarily in the first few trials. In addition, participants exhibited more unintentional forward drift in a blindfolded marching-in-place task after exposure to nonvisual walking. The results suggest that participants not only gain confidence as blind-walking exposure increases, but also adapt to nonvisual walking in a way that biases responses toward progressively longer walked distances.
38
Perceived relative distance on the ground affected by the selection of depth information. Percept Psychophys 2008; 70:707-13. [PMID: 18556932 DOI: 10.3758/pp.70.4.707] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Our visual space does not appear to change when we scan or shift attention between locations. This appearance of stability implies that the process of selecting depth information is not crucial for constructing visual space. We present evidence to the contrary. We focused on space perception at intermediate distances, which depends on the integration of depth information on the ground. We propose a selection hypothesis stating that the integration process is influenced by where the depth information is selected. Specifically, the integration process inaccurately represents the ground when one samples depth information only from the far ground surface, instead of sequentially from the near to the far ground. To test this, observers matched the depth/length of a sagittal bar (test) to the width of a laterally oriented bar (reference) in three conditions in a full-cue environment that compelled the visual system to sample from different parts of the ground. These conditions had the lateral reference bar placed (1) adjacent to the test bar, (2) on the far ground, and (3) on the near ground. We found that the sagittal bar was perceived as shorter in Conditions 1 and 2 than in Condition 3. This finding supports the selection hypothesis, since only Condition 3 led to more accurate ground-surface integration/representation and less error in relative distance/depth perception. We also found that performance in all three conditions was similar in the dark, where no depth information is available on the ground, indicating that the results cannot be attributed to asymmetric visual scanning but, rather, to differential information selection.
39
Wu B, He ZJ, Ooi TL. Inaccurate representation of the ground surface beyond a texture boundary. Perception 2007; 36:703-21. [PMID: 17624117 PMCID: PMC4000708 DOI: 10.1068/p5693] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
The sequential-surface-integration-process (SSIP) hypothesis was proposed to elucidate how the visual system constructs the ground-surface representation in the intermediate distance range (He et al., 2004, Perception, 33, 789-806). According to the hypothesis, the SSIP constructs an accurate representation of the near ground surface by using reliable near depth cues. The near ground representation then serves as a template for integrating the adjacent surface patch, with the texture gradient as the predominant depth cue. By sequentially integrating the surface patches from near to far, the visual system obtains the global ground representation. A critical prediction of the SSIP hypothesis is that, when an abrupt texture-gradient change exists between the near and far ground surfaces, the SSIP can no longer accurately represent the far surface. Consequently, the representation of the far surface will be slanted upward toward the frontoparallel plane (owing to the intrinsic bias of the visual system), and the egocentric distance of a target on the far surface will be underestimated. Our previous findings in a real 3-D environment showed that observers underestimated target distance across a texture boundary. Here, we used a virtual-reality system to first test distance judgments with a distance-matching task. We created the texture boundary by having virtual grass- and cobblestone-textured patterns abut on a flat (horizontal) ground surface in Experiment 1, and by placing a brick wall to interrupt the continuous texture gradient of a flat grass surface in Experiment 2. In both instances, observers underestimated the target distance across the texture boundary, compared to the homogeneous-texture ground surface (control). Second, we tested the proposal that the far surface beyond the texture boundary is perceived as slanted upward. For this, we used a virtual checkerboard-textured ground surface that was interrupted by a texture boundary. We found that not only was the target distance beyond the texture boundary underestimated relative to the homogeneous-texture condition, but the far surface beyond the texture boundary was also perceived as relatively slanted upward (Experiment 3). Altogether, our results confirm the predictions of the SSIP hypothesis.
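One way to see how a biased far-surface representation compresses distance is to chain local patch estimates from near to far, as the SSIP hypothesis describes. A toy sketch in Python (the slant-bias parameterization is our illustrative assumption, not the authors' model):

    import numpy as np

    def integrate_ground(patch_slants_deg, patch_lengths_m,
                         boundary_index=None, boundary_bias_deg=0.0):
        # Chain patch-by-patch ground estimates from near to far; patches at
        # or beyond a texture boundary get an upward slant bias, tilting the
        # represented far surface toward the frontoparallel plane.
        x, z = 0.0, 0.0  # represented along-ground distance and height
        profile = [(x, z)]
        for i, (slant, length) in enumerate(zip(patch_slants_deg, patch_lengths_m)):
            if boundary_index is not None and i >= boundary_index:
                slant += boundary_bias_deg
            x += length * np.cos(np.radians(slant))
            z += length * np.sin(np.radians(slant))
            profile.append((x, z))
        return profile

    # Six 1-m patches of flat ground; a texture boundary after patch 3 adds
    # a 15-degree upward bias to the remaining patches:
    flat = integrate_ground([0.0] * 6, [1.0] * 6)
    biased = integrate_ground([0.0] * 6, [1.0] * 6,
                              boundary_index=3, boundary_bias_deg=15.0)
    print(flat[-1])    # (6.0, 0.0)
    print(biased[-1])  # about (5.90, 0.78): the represented far surface ends
                       # nearer and higher, i.e., slanted up and compressed.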
Affiliation(s)
- Bing Wu
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY 40292, USA
- Zijiang J He
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY 40292, USA
- Teng Leng Ooi
- Department of Basic Sciences, Pennsylvania College of Optometry, Elkins Park, PA 19027, USA
40
Vecera SP, Palmer SE. Grounding the figure: surface attachment influences figure-ground organization. Psychon Bull Rev 2007; 13:563-9. [PMID: 17201352 DOI: 10.3758/bf03193963] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
We investigated whether the lower region effect on figure-ground organization (Vecera, Vogel, and Woodman, 2002) would generalize to contextual depth planes in vertical orientations, as is predicted by a theoretical analysis based on the ecological statistics of edges arising from objects that are attached to surfaces of support. Observers viewed left/right ambiguous figure-ground displays that occluded middle sections of four types of contextual inducers: two types of attached, receding, vertical planes (walls) that used linear perspective and/or texture gradients to induce perceived depth and two types of similar trapezoidal control figures that used either uniform color or random texture to reduce or eliminate perceived depth. The results showed a reliable bias toward seeing as "figure" the side of the figure-ground display that was attached to the receding depth plane, but no such bias for the corresponding side in either of the control conditions. The results are interpreted as being consistent with the attachment hypothesis that the lower region cue to figure-ground organization results from ecological biases in edge interpretation that arise when objects are attached to supporting surfaces in the terrestrial gravitational field.
Affiliation(s)
- Shaun P Vecera
- Department of Psychology, University of Iowa, E11 Seashore Hall, Iowa City, IA 52242-1407, USA.
41
Kojima T, Kusumi T. Computing positions indicated by spatial terms in three-dimensional space. Psychologia 2007. [DOI: 10.2117/psysoc.2007.203] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
42
Tcheang L, Gilson SJ, Glennerster A. Systematic distortions of perceptual stability investigated using immersive virtual reality. Vision Res 2005; 45:2177-89. [PMID: 15845248 PMCID: PMC2833395 DOI: 10.1016/j.visres.2005.02.006] [Citation(s) in RCA: 27] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2004] [Revised: 02/02/2005] [Accepted: 02/02/2005] [Indexed: 11/28/2022]
Abstract
Using an immersive virtual reality system, we measured the ability of observers to detect the rotation of an object when its movement was yoked to the observer's own translation. Most subjects had a large bias such that a static object appeared to rotate away from them as they moved. Thresholds for detecting target rotation were similar to those for an equivalent speed discrimination task carried out by static observers, suggesting that visual discrimination is the predominant limiting factor in detecting target rotation. Adding a stable visual reference frame almost eliminated the bias. Varying the viewing distance of the target had little effect, consistent with observers underestimating distance walked. However, accuracy of walking to a briefly presented visual target was high and not consistent with an underestimation of distance walked. We discuss implications for theories of a task-independent representation of visual space.
Affiliation(s)
- Lili Tcheang
- University Laboratory of Physiology, Parks Road, Oxford, OX1 3PT
- Stuart J. Gilson
- University Laboratory of Physiology, Parks Road, Oxford, OX1 3PT
43
Abstract
As we move through space, stationary objects around us show motion parallax: their directions relative to us change at different rates, depending on their distance. Does the brain incorporate parallax when it updates its stored representations of space? We had subjects fixate a distant target and then we flashed lights, at different distances, onto the retinal periphery. Subjects translated sideways while keeping their gaze on the distant target, and then they looked to the remembered location of the flash. Their responses corrected almost perfectly for parallax: they turned their eyes farther for nearer targets, in the predicted nonlinear patterns. Computer simulations suggest a neural mechanism in which feedback about self-motion updates remembered locations of objects within an internal map of three-dimensional visual space.
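The parallax correction being tested has simple geometry (simplified here to a flash initially straight ahead; notation ours): after a sideways translation T, a target at distance d shifts in egocentric direction by

    \Delta\theta = \arctan(T / d) ,

so nearer flashes require larger compensatory gaze rotations, and the dependence on distance is nonlinear, matching the response patterns reported.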
44
Loomis J, Beall A. Visually controlled locomotion: its dependence on optic flow, three-dimensional space perception, and cognition. Ecological Psychology 1998. [DOI: 10.1207/s15326969eco103&4_6] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]