1
Chen Y, He ZJ, Ooi TL. Factors Affecting Stimulus Duration Threshold for Depth Discrimination of Asynchronous Targets in the Intermediate Distance Range. Invest Ophthalmol Vis Sci 2024; 65:36. [PMID: 39446355] [PMCID: PMC11512565] [DOI: 10.1167/iovs.65.12.36] [Received: 05/09/2024] [Accepted: 10/04/2024]
Abstract
Purpose: Binocular depth discrimination in the near distance range (<2 m) improves with stimulus duration. However, whether the same response pattern holds in the intermediate distance range (approximately 2-25 m) remains unknown, because the spatial coding mechanisms are thought to be different.
Methods: We used the two-interval forced-choice procedure to measure absolute depth discrimination of paired asynchronous targets (3, 6, or 16 arc min of disparity). The paired targets (0.2°) were located over a distance range of 4.5 to 7.0 m and a height range of 0.15 to 0.7 m. Experiment 1 estimated duration thresholds for binocular depth discrimination at varying target durations (40-1610 ms), in the presence of a 2 × 6 array of parallel texture elements spanning 1.5 × 5.83 m on the floor. The texture elements provided a visible background in the light-tight room (9 × 3 m). Experiment 2 used a similar setup to control for viewing conditions: binocular versus monocular, and with versus without the texture background. Experiment 3 compared binocular depth discrimination between brief (40, 80, and 125 ms) and continuous texture-background presentation.
Results: The stimulus duration threshold for depth discrimination decreased with increasing disparity in Experiment 1. Experiment 2 revealed that depth discrimination with the texture background was near chance level under monocular viewing, and that performance with binocular viewing degraded without the texture background. Experiment 3 showed that continuous texture-background presentation enhances binocular depth discrimination.
Conclusions: Absolute depth discrimination improves with target duration, binocular viewing, and a texture background. Performance further improved with longer background duration, underscoring the role of ground-surface representation in spatial coding.
Affiliation(s)
- Yiya Chen
- College of Optometry, The Ohio State University, Columbus, Ohio, United States
- Zijiang J. He
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, Kentucky, United States
- Teng Leng Ooi
- College of Optometry, The Ohio State University, Columbus, Ohio, United States
2
Dong B, Qian Q, Chen A, Wu Q, Gu Z, Zhou X, Liang X, Pan JS, Zhang M. The allocentric nature of ground-surface representation: A study of depth and location perception. Vision Res 2024; 223:108462. [PMID: 39111102] [DOI: 10.1016/j.visres.2024.108462] [Received: 11/20/2023] [Revised: 07/15/2024] [Accepted: 07/23/2024]
Abstract
When observers perceive 3D relations, they represent depth and spatial locations with the ground as a reference. This frame of reference could be egocentric, that is, moving with the observer, or allocentric, that is, remaining stationary and independent of the moving observer. In three experiments using a blind-walking task, we tested whether the representations of relative depth and of spatial location took an egocentric or an allocentric frame of reference. In Experiments 1 and 2, participants observed a target in depth and then either immediately blind walked the previously seen distance between the target and the self, or first walked 3 m to the side or along an oblique path and then blind walked that distance. The conditions thus differed in whether blind walking started from the observation point. Results showed that blind-walking distance varied with the starting location. Thus, the represented distance did not appear to undergo spatial updating with the moving observer, and the frame of reference was likely allocentric. In Experiment 3, participants observed a target in space and then either immediately blind walked to the target, or first blind walked to another starting point and then blind walked to the target. Results showed that the end location of blind walking differed across starting points, suggesting that the representation of spatial location also takes an allocentric frame of reference. Taken together, these experiments converge to suggest that observers use an allocentric frame of reference to construct their mental representation of space.
Affiliation(s)
- Bo Dong
- Department of Psychology, Suzhou University of Science and Technology, Suzhou, China
- Qinyue Qian
- Department of Psychology, Soochow University, Suzhou, China
- Airui Chen
- Department of Psychology, Suzhou University of Science and Technology, Suzhou, China
- Qiong Wu
- Department of Psychology, Suzhou University of Science and Technology, Suzhou, China; Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Zhengyin Gu
- Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Xinyan Zhou
- School of Humanities, Jiangnan University, Wuxi, China
- Xuechen Liang
- Chengdu Longquanyi District Xiping Primary School, Chengdu, China
- Ming Zhang
- Department of Psychology, Suzhou University of Science and Technology, Suzhou, China; Department of Psychology, Soochow University, Suzhou, China; Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
3
Zhou L, Wei W, Ooi TL, He ZJ. An allocentric human odometer for perceiving distances on the ground plane. eLife 2024; 12:RP88095. [PMID: 39023517] [PMCID: PMC11257686] [DOI: 10.7554/elife.88095]
Abstract
We reliably judge the locations of static objects when we walk, despite the retinal images of these objects moving with every step we take. Here, we showed that our brains solve this optical illusion by adopting an allocentric spatial reference frame. We measured perceived target location after the observer walked a short distance from the home base. Supporting the allocentric coding scheme, we found that the intrinsic bias, which acts as a spatial reference frame for perceiving the location of a dimly lit target in the dark, remained grounded at the home base rather than traveling along with the observer. The path-integration mechanism responsible for this can utilize both active and passive (vestibular) translational motion signals, but only along the horizontal direction. This asymmetric path-integration finding in human visual space perception is reminiscent of the asymmetric spatial-memory finding in desert ants, pointing to nature's wondrous and logically simple design for terrestrial creatures.
Affiliation(s)
- Liu Zhou
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, United States
- Wei Wei
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, United States
- College of Optometry, The Ohio State University, Columbus, United States
- Teng Leng Ooi
- College of Optometry, The Ohio State University, Columbus, United States
- Zijiang J He
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, United States
4
Zhou L, Wei W, Ooi TL, He ZJ. An allocentric human odometer for perceiving distances on the ground plane. bioRxiv 2024:2023.03.22.533725. [PMID: 38645085] [PMCID: PMC11030244] [DOI: 10.1101/2023.03.22.533725]
Abstract
We reliably judge the locations of static objects when we walk, despite the retinal images of these objects moving with every step we take. Here, we showed that our brains solve this optical illusion by adopting an allocentric spatial reference frame. We measured perceived target location after the observer walked a short distance from the home base. Supporting the allocentric coding scheme, we found that the intrinsic bias [1, 2], which acts as a spatial reference frame for perceiving the location of a dimly lit target in the dark, remained grounded at the home base rather than traveling along with the observer. The path-integration mechanism responsible for this can utilize both active and passive (vestibular) translational motion signals, but only along the horizontal direction. This anisotropic path-integration finding in human visual space perception is reminiscent of the anisotropic spatial-memory finding in desert ants [3], pointing to nature's wondrous and logically simple design for terrestrial creatures.
5
Sedgwick HA. J. J. Gibson's "Ground Theory of Space Perception". Iperception 2021; 12:20416695211021111. [PMID: 34377427] [PMCID: PMC8334293] [DOI: 10.1177/20416695211021111] [Received: 02/09/2021] [Accepted: 05/11/2021]
Abstract
J. J. Gibson's ground theory of space perception is contrasted with Descartes' theory, which reduces all of space perception to the perception of distance and angular direction, relative to an abstract viewpoint. Instead, Gibson posits an embodied perceiver, grounded by gravity, in a stable layout of realistically textured, extended surfaces and more delimited objects supported by these surfaces. Gibson's concept of optical contact ties together this spatial layout, locating each surface relative to the others and specifying the position of each object by its location relative to its surface of support. His concept of surface texture, augmented by perspective structures such as the horizon, specifies the scale of objects and extents within this layout. And his concept of geographical slant provides surfaces with environment-centered orientations that remain stable as the perceiver moves around. Contact-specified locations on extended environmental surfaces may be the unattended primitives of the visual world, rather than egocentric or allocentric distances. The perception of such distances may best be understood using Gibson's concept of affordances. Distances may be perceived only as needed, bound through affordances to the particular actions that require them.
6
McCann BC, Hayhoe MM, Geisler WS. Contributions of monocular and binocular cues to distance discrimination in natural scenes. J Vis 2018; 18:12. [PMID: 29710302] [PMCID: PMC5901372] [DOI: 10.1167/18.4.12] [Received: 08/29/2017] [Accepted: 02/19/2018]
Abstract
Little is known about distance discrimination in real scenes, especially at long distances. This is not surprising given the logistical difficulties of making such measurements. To circumvent these difficulties, we collected 81 stereo images of outdoor scenes, together with precisely registered range images that provided the ground-truth distance at each pixel location. We then presented the stereo images in the correct viewing geometry and measured the ability of human subjects to discriminate the distance between locations in the scene, as a function of absolute distance (3 m to 30 m) and the angular spacing between the locations being compared (2°, 5°, and 10°). Measurements were made for binocular and monocular viewing. Thresholds for binocular viewing were quite small at all distances (Weber fractions less than 1% at 2° spacing and less than 4% at 10° spacing). Thresholds for monocular viewing were higher than those for binocular viewing out to distances of 15-20 m, beyond which they were the same. Using standard cue-combination analysis, we also estimated what the thresholds would be based on binocular-stereo cues alone. With two exceptions, we show that the entire pattern of results is consistent with what one would expect from classical studies of binocular disparity thresholds and separation/size discrimination thresholds measured with simple laboratory stimuli. The first exception is some deviation from the expected pattern at close distances (especially for monocular viewing). The second exception is that thresholds in natural scenes are lower, presumably because of the rich figural cues contained in natural images.
Affiliation(s)
- Brian C McCann
- Texas Advanced Computing Center, Center for Perceptual Systems and Department of Psychology, University of Texas at Austin, Austin, TX, USA
- Mary M Hayhoe
- Center for Perceptual Systems and Department of Psychology, University of Texas at Austin, Austin, TX, USA
- Wilson S Geisler
- Center for Perceptual Systems and Department of Psychology, University of Texas at Austin, Austin, TX, USA
7
Abstract
Attention readily facilitates the detection and discrimination of objects, but it is not known whether it helps to form the vast volume of visual space that contains the objects and within which actions are implemented. Conventional wisdom suggests not, given the effortless ease with which we perceive three-dimensional (3D) scenes on opening our eyes. Here, we show evidence to the contrary. In Experiment 1, the observer judged the location of a briefly presented target placed either on the textured ground or on the ceiling surface. Judged location was more accurate for a target on the ground, provided that the ground was visible and that the observer directed attention to the lower visual field, not the upper field. This reveals that attention facilitates space perception with reference to the ground. Experiment 2 showed that the judged location of a target in mid-air, with both ground and ceiling surfaces present, was more accurate when the observer directed attention to the lower visual field; this indicates that the attention effect extends to visual space above the ground. These findings underscore the role of attention in anchoring visual orientation in space, which is arguably a primal event that enhances one's ability to interact with objects and surface layouts within the visual space. The fact that the effect of attention was contingent on the ground being visible suggests that our terrestrial visual system is best served by its ecological niche.
Affiliation(s)
- Liu Zhou
- Key Laboratory of Brain Functional Genomics (MOE & STCSM), Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- Chenglong Deng
- Key Laboratory of Brain Functional Genomics (MOE & STCSM), Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- Teng Leng Ooi
- College of Optometry, The Ohio State University, Columbus, Ohio 43210, USA
- Zijiang J He
- Key Laboratory of Brain Functional Genomics (MOE & STCSM), Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China; Department of Psychological and Brain Sciences, University of Louisville, Louisville, Kentucky 40292, USA; CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
8
Zhou L, Ooi TL, He ZJ. Intrinsic spatial knowledge about terrestrial ecology favors the tall for judging distance. Sci Adv 2016; 2:e1501070. [PMID: 27602402] [PMCID: PMC5007070] [DOI: 10.1126/sciadv.1501070] [Received: 08/10/2015] [Accepted: 08/02/2016]
Abstract
Our sense of vision reliably directs and guides our everyday actions, such as reaching and walking. This ability is especially fascinating because the optical images of natural scenes that project into our eyes are insufficient to adequately form a perceptual space. It has been proposed that the brain makes up for this inadequacy by using its intrinsic spatial knowledge. However, it is unclear what constitutes intrinsic spatial knowledge and how it is acquired. We investigated this question and found evidence of an ecological basis, which uses the statistical spatial relationship between the observer and the terrestrial environment, namely, the ground surface. We found that in dark and reduced-cue environments, where intrinsic knowledge has a greater contribution, perceived target location is more accurate when referenced to the ground than to the ceiling. Furthermore, taller observers localized the target more accurately. Superior performance was also observed in the full-cue environment, even when we compensated for the observers' heights by having the taller observers sit on a chair and the shorter observers stand on a box. Although fascinating, this finding dovetails with the prediction of the ecological hypothesis for intrinsic spatial knowledge. It suggests that an individual's accumulated lifetime experience of being tall, and his or her constant interactions with ground-based objects, not only shapes intrinsic spatial knowledge but also confers an advantage in spatial ability in the intermediate distance range.
Affiliation(s)
- Liu Zhou
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Science and Technology Commission of Shanghai Municipality), Institute of Cognitive Neurosciences, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- Teng Leng Ooi
- College of Optometry, Ohio State University, Columbus, OH 43210, USA
- Zijiang J. He
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Science and Technology Commission of Shanghai Municipality), Institute of Cognitive Neurosciences, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY 40292, USA
9
Abstract
Distance is commonly underperceived by up to 50% in virtual environments (VEs), in contrast to relatively accurate real-world judgments. Experiments reported by Geuss, Stefanucci, Creem-Regehr, and Thompson (Journal of Experimental Psychology: Human Perception and Performance, 38, 1242-1253, 2012) indicate that the exocentric distance separating two objects in a VE is underperceived when the objects are oriented in the sagittal plane (depth extents), but veridically perceived when oriented in a frontoparallel plane (frontal extents). The authors conclude that "distance underestimation in the [VE] generalizes to intervals in the depth plane, but not to intervals in the frontal plane." The current experiment evaluated an alternative hypothesis: that the accurate judgments of frontal extents reported by Geuss et al. were due to a fortunate balance between underperception caused by the VE and overperception of frontal relative to depth extents. Participants judged frontal and depth extents in the classroom VE used by Geuss et al. and in a sparser VE containing only a grass-covered ground plane. Judgments in the classroom VE replicated the findings of Geuss et al., but judgments in the grass VE showed underperception of both depth and frontal extents, indicating that frontal extents are not immune to underperception in VEs.
10
Ooi TL, He ZJ. Space perception of strabismic observers in the real world environment. Invest Ophthalmol Vis Sci 2015; 56:1761-8. [PMID: 25698702] [PMCID: PMC4358738] [DOI: 10.1167/iovs.14-15741] [Received: 09/25/2014] [Accepted: 02/06/2015]
Abstract
PURPOSE: Space perception beyond the near distance range (>2 m) is important for target localization, and for directing and guiding a variety of daily activities, including driving and walking. However, it is unclear whether the absolute (egocentric) localization of a single target in the intermediate distance range requires binocular vision, and if so, whether having subnormal stereopsis in strabismus impairs one's ability to localize the target.
METHODS: We investigated this by measuring the perceived absolute location of a target by observers with normal binocular vision (n = 8; mean age, 24.5 years) and observers with strabismus (n = 8; mean age, 24.9 years) under monocular and binocular conditions. The observers used the blind walking-gesturing task to indicate the judged location of a target at various viewing distances (2.73-6.93 m) and heights (0, 30, and 90 cm) above the floor. Near stereopsis was assessed with the Randot Stereotest.
RESULTS: Both groups of observers accurately judged the absolute distance of a target on the ground (height = 0 cm) with either monocular or binocular viewing. However, when the target was suspended in midair, the normal observers accurately judged target location with binocular viewing but not with monocular viewing (mean slant angle, 0.8° ± 0.5° vs. 7.4° ± 1.4°; P < 0.001, with a slant angle of 0° representing accurate localization). In contrast, the strabismic observers with poorer stereoacuity made larger localization errors in both viewing conditions, though with fewer errors during binocular viewing (mean slant angle, 2.7° ± 0.4° vs. 9.2° ± 1.3°; P < 0.0025). Further analysis revealed that the localization error (the slant angle) correlates positively with stereo threshold during binocular viewing (r² = 0.479, P < 0.005), but not during monocular viewing (r² = 0.0002, P = 0.963).
CONCLUSIONS: Monocular depth information is sufficient for locating a single target on the ground, but binocular depth information is required when the target is suspended in midair. Since the absolute binocular disparity information of a single target is weak beyond 2 m, we suggest the visual system localizes the target using the relative binocular disparity between the midair target and the visible ground surface. Consequently, strabismic observers with residual stereopsis localize a target more accurately than their counterparts without stereo ability.
Affiliation(s)
- Teng Leng Ooi
- The Ohio State University, Columbus, Ohio, United States
- Zijiang J. He
- University of Louisville, Louisville, Kentucky, United States