1. Zhao H, Straub D, Rothkopf CA. People learn a two-stage control for faster locomotor interception. Psychol Res 2024;88:167-186. PMID: 37083875; PMCID: PMC10806002; DOI: 10.1007/s00426-023-01826-8.
Abstract
People can use the constant target-heading (CTH) strategy or the constant bearing (CB) strategy to guide locomotor interception, but it remains unclear whether people can learn new interception behavior. Here, we investigated how people learn to adjust their steering to intercept targets faster. Participants steered a car to intercept a moving target in a virtual environment resembling a natural open field. Their baseline interceptions were better accounted for by the CTH strategy. After five learning sessions across multiple days, in which participants received feedback about their interception durations, they adopted a two-stage control: a quick initial burst of turning accompanied by an increase of the target-heading angle during early interception was followed by significantly less turning with small changes in target-heading angle during late interception. The target's bearing angle not only showed this two-stage pattern but also changed comparatively little during late interception, leaving it unclear which strategy participants had adopted. In a subsequent test session, the two-stage pattern of participants' turning adjustment and the target-heading angle transferred to new target conditions and to a new environment without visual information about an allocentric reference frame, which should preclude participants from using the CB strategy. Indeed, the pattern of the target's bearing angle did not transfer to all the new conditions. These results suggest that participants learned a two-stage control for faster interception: they learned to quickly increase the target-heading angle during early interception and subsequently follow the CTH strategy during late interception.
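The two strategies contrasted in this entry rest on two angular quantities. A minimal sketch of how they differ (the planar geometry and all function and variable names are our illustrative assumptions, not the authors' code):

```python
import math

def _wrap(a):
    # Wrap an angle into [-pi, pi).
    return (a + math.pi) % (2 * math.pi) - math.pi

def target_heading_angle(agent, heading, target):
    """Angle between the agent's current heading and the agent-to-target
    line (egocentric). The CTH strategy keeps this angle constant."""
    to_target = math.atan2(target[1] - agent[1], target[0] - agent[0])
    return _wrap(to_target - heading)

def bearing_angle(agent, target, reference=0.0):
    """Angle of the agent-to-target line relative to a fixed allocentric
    reference direction. The CB strategy keeps this angle constant, which
    requires access to a stable allocentric reference frame."""
    to_target = math.atan2(target[1] - agent[1], target[0] - agent[0])
    return _wrap(to_target - reference)
```

When the agent's heading happens to coincide with the allocentric reference direction, the two angles are identical, which is one reason the two strategies can be confounded experimentally.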
Affiliation(s)
- Huaiyong Zhao: Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China; Institute of Psychology, Technical University of Darmstadt, Darmstadt, Germany
- Dominik Straub: Institute of Psychology and Centre for Cognitive Science, Technical University of Darmstadt, Darmstadt, Germany
- Constantin A Rothkopf: Institute of Psychology and Centre for Cognitive Science, Technical University of Darmstadt, Darmstadt, Germany; Frankfurt Institute for Advanced Studies, Goethe University, Frankfurt, Germany
2. Hibbard PB. Virtual Reality for Vision Science. Curr Top Behav Neurosci 2023;65:131-159. PMID: 36723780; DOI: 10.1007/7854_2023_416.
Abstract
Virtual reality (VR) allows us to create visual stimuli that are both immersive and reactive, and it opens many new opportunities in vision science. In particular, it allows us to present wide field-of-view, immersive visual stimuli; it lets observers actively explore the environments that we create; and it helps us understand how visual information is used in the control of behaviour. In contrast with traditional psychophysical experiments, VR provides much greater flexibility in creating environments and tasks that are closely aligned with our everyday experience. These benefits are of particular value in developing theories of the behavioural goals of the visual system and in explaining how visual information is processed to achieve these goals. The use of VR in vision science also presents a number of technical challenges, relating both to how the available software and hardware limit our ability to accurately specify the visual information that defines our virtual environments, and to how we interpret data gathered in experiments with a freely moving observer in a responsive environment.
Affiliation(s)
- Paul B Hibbard: Department of Psychology, University of Essex, Colchester, UK
3. Characterisation of visual guidance of steering to intercept targets following curving trajectories using Qualitative Inconsistency Detection. Sci Rep 2022;12:20246. PMID: 36424412; PMCID: PMC9691627; DOI: 10.1038/s41598-022-24625-4.
Abstract
This study explored the informational variables guiding steering behaviour in a locomotor interception task with targets moving along circular trajectories. Using a new method of analysis focussing on the temporal co-evolution of steering behaviour and the potential information sources driving it, we set out to invalidate reliance on plausible informational candidates. Applied to individual trials rather than ensemble averages, this Qualitative Inconsistency Detection (QuID) method revealed that steering behaviour was not compatible with reliance on information grounded in any type of change in the agent-centred target-heading angle. First-order changes in the environment-centred target's bearing angle could also not adequately account for the variations in behaviour observed under the different experimental conditions. Capturing the observed timing of unfolding steering behaviour ultimately required a combination of (velocity-based) first-order and (acceleration-based) second-order changes in bearing angle. While this result may point to reliance on fractional-order based changes in bearing angle, the overall importance of the present findings resides in the demonstration of the necessity to break away from the existing practice of trying to fit behaviour into a priori postulated functional strategies based on categorical differences between operative heuristic rules or control laws.
4. Developmental changes in gaze patterns in response to radial optic flow in toddlerhood and childhood. Sci Rep 2022;12:11566. PMID: 35799054; PMCID: PMC9262903; DOI: 10.1038/s41598-022-15730-5.
Abstract
A large-field visual motion pattern (optic flow) with a radial structure provides a compelling perception of self-motion; a radially expanding/contracting optic flow generates the perception of forward/backward locomotion. Moreover, the focus of a radial optic flow, particularly an expansive flow, is an important visual cue for perceiving and controlling the heading direction during human locomotion. Previous research has shown that human gaze patterns have an "expansion bias": a tendency to be more attracted to the focus of expansive flow than to the focus of contractive flow. We investigated the development of the expansion bias in children (N = 240, 1–12 years) and adults (N = 20). Most children aged ≥ 5 years and adults showed a significant tendency to shift their gaze to the focus of an expansive flow, whereas the youngest group (1-year-old children) showed a significant but opposing tendency; their gaze was more attracted to the focus of contractive flow than to the focus of expansive flow. The relationship between this developmental change, from a "contraction bias" in early toddlerhood to the expansion bias at later developmental stages, and possible contributing factors (e.g., global visual motion processing abilities and locomotor experience) is discussed.
5. Visual guidance of locomotor interception is based on nulling changes in target bearing (not egocentric target direction nor target-heading angle). Hum Mov Sci 2022;82:102929. PMID: 35121367; DOI: 10.1016/j.humov.2022.102929.
Abstract
In two experiments we studied how participants steer to intercept uniformly moving targets in a virtual driving task under hypothesis-differentiating conditions of initial target eccentricity and target motion. In line with our re-analysis of findings from earlier studies, in both experiments the observed interception behavior could not be understood as resulting from reliance on (changes in) egocentric target direction nor from reliance on (changes in) target-heading angle. The overall pattern of results was, however, compatible with a control strategy based on nulling changes in the target's bearing angle. The presence of reversals in movement direction under specific combinations of target eccentricity and motion conditions indicated that the information used was not purely rate-of-change (i.e., first-order) based but carried traces of an influence of initial target position. In Experiment 2 we explicitly tested the potential role of early reliance on perceived egocentric target direction by examining the effects of a 10° rotation of the visual scene (i.e., of both target and environment). While such a rotation gave rise to minor changes in the moment of initiation of the first steering action, contrary to predictions it did not affect the characteristics of the direction-reversal phenomenon. We conclude that the visual guidance of locomotor interception is best understood as resulting from nulling changes in the target's bearing angle, with such nulling perhaps best conceived as being fractional-order (rather than integer-order) driven.
6. Matthis JS, Muller KS, Bonnen KL, Hayhoe MM. Retinal optic flow during natural locomotion. PLoS Comput Biol 2022;18:e1009575. PMID: 35192614; PMCID: PMC8896712; DOI: 10.1371/journal.pcbi.1009575.
Abstract
We examine the structure of the visual motion projected on the retina during natural locomotion in real-world environments. Bipedal gait generates a complex, rhythmic pattern of head translation and rotation in space, so without gaze stabilization mechanisms such as the vestibulo-ocular reflex (VOR), a walker's visually specified heading would vary dramatically throughout the gait cycle. The act of fixation on stable points in the environment nulls image motion at the fovea, resulting in stable patterns of outflow on the retinae centered on the point of fixation. These outflowing patterns retain a higher-order structure that is informative about the stabilized trajectory of the eye through space. We measured this structure by applying the curl and divergence operations to the retinal flow velocity vector fields and found features that may be valuable for the control of locomotion. In particular, the sign and magnitude of foveal curl in retinal flow specifies the body's trajectory relative to the gaze point, while the point of maximum divergence in the retinal flow field specifies the walker's instantaneous overground velocity/momentum vector in retinotopic coordinates. Assuming that walkers can determine body position relative to gaze direction, these time-varying retinotopic cues for the body's momentum could provide a visual control signal for locomotion over complex terrain. In contrast, the temporal variation of the eye-movement-free, head-centered flow fields is large enough to be problematic for use in steering towards a goal. Consideration of optic flow in the context of real-world locomotion therefore suggests a re-evaluation of the role of optic flow in the control of action during natural behavior.
We recorded the full-body kinematics and binocular gaze of humans walking through a real-world natural environment and estimated visual motion (optic flow) using both computational video analysis and geometric simulation. Contrary to established theories of the role of optic flow in the control of locomotion, we found that eye-movement-free, head-centric optic flow is highly unstable due to the complex phasic trajectory of the head during natural locomotion, rendering it an unlikely candidate for heading perception. In contrast, retina-centered optic flow consisted of a regular pattern of outflowing motion centered on the fovea. Retinal optic flow contained highly consistent patterns that specified the walker's trajectory relative to the point of fixation, providing powerful retinotopic cues that may be used for the visual control of locomotion in natural environments. This examination of optic flow in real-world contexts suggests a need to re-evaluate existing theories of the role of optic flow in the visual control of action during natural behavior.
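The curl and divergence measures used here can be estimated from any sampled 2-D velocity field by finite differences. A minimal sketch (assuming a regular sampling grid; the function name and the simple radial example are ours, not the authors' pipeline):

```python
import numpy as np

def flow_curl_div(u, v, spacing=1.0):
    """Finite-difference estimates of curl (dv/dx - du/dy) and divergence
    (du/dx + dv/dy) for a 2-D velocity field sampled on a regular grid.
    u and v are arrays indexed [row (y), column (x)]."""
    du_dy, du_dx = np.gradient(u, spacing)
    dv_dy, dv_dx = np.gradient(v, spacing)
    return dv_dx - du_dy, du_dx + dv_dy

# A purely radial (expanding) flow, like stabilized outflow at the fovea:
# divergence is positive everywhere and curl vanishes.
y, x = np.mgrid[-1:1:21j, -1:1:21j]      # 0.1 grid spacing
curl, div = flow_curl_div(x, y, spacing=0.1)   # u = x, v = y
```

In the paper's terms, the sign and magnitude of curl near the fovea relate to the body's trajectory relative to the gaze point, while the location of maximum divergence relates to the instantaneous overground velocity vector.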
Affiliation(s)
- Jonathan Samir Matthis: Department of Biology, Northeastern University, Boston, Massachusetts, United States of America
- Karl S. Muller: Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
- Kathryn L. Bonnen: School of Optometry, Indiana University Bloomington, Bloomington, Indiana, United States of America
- Mary M. Hayhoe: Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
7. Xing X, Saunders JA. Different generalization of fast and slow visuomotor adaptation across locomotion and pointing tasks. Exp Brain Res 2021;239:2859-2871. PMID: 34292343; DOI: 10.1007/s00221-021-06112-w.
Abstract
Sensorimotor adaptation can involve multiple learning processes with different time courses, and these processes may show different patterns of transfer. In this study, we tested how slow learning and fast learning transfer across tasks, and how specific that transfer is. We tested two natural goal-directed tasks: pointing and walking toward a visible target. We also tested a novel "hand locomotion" task in which subjects used pointing movements to cause simulated self-motion in virtual reality. The hand locomotion task used the same physical movement as pointing, but performed the same function as stepping. During an experimental block, subjects performed alternating training trials with perturbed visual feedback and test trials with no feedback. The test trials were either the same task, to measure adaptation, or a different task, to measure transfer. Perturbations on adaptation trials varied over time as a sum of sinusoids with different frequencies. Fast learning would be expected to produce equal responses to fast and slow perturbations, while slower learning would dampen responses to higher-frequency perturbations. Subjects were generally not aware of the smoothly varying perturbations, but showed detectable adaptation for all three tasks. Only pointing produced significantly different responses to high- and low-frequency perturbations, consistent with slow learning. Adaptation of pointing produced more transfer to the hand locomotion task, which shared the same effector and motor actions, than to the stepping task. The other tasks showed fast learning but little or no slow learning, and equal transfer to tasks with a different effector or function. Our results suggest that the slower components of sensorimotor adaptation are more movement specific, while faster learning is more generalizable.
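The sum-of-sinusoids perturbation described above is a standard system-identification device: using non-harmonically related frequencies lets the response at each frequency be separated afterwards by Fourier analysis. A hedged sketch (the specific frequencies, amplitudes, phases, and sample rate below are illustrative, not the study's values):

```python
import numpy as np

def sum_of_sinusoids(t, freqs_hz, amps, phases):
    """Smoothly varying perturbation signal built as a sum of sinusoids.
    Non-harmonically related frequencies allow the adaptive response at
    each frequency to be isolated later via Fourier analysis."""
    t = np.asarray(t, dtype=float)
    return sum(a * np.sin(2.0 * np.pi * f * t + p)
               for f, a, p in zip(freqs_hz, amps, phases))

t = np.arange(0.0, 60.0, 1.0 / 90.0)      # 60 s at a 90 Hz display rate
perturbation = sum_of_sinusoids(t,
                                freqs_hz=[0.05, 0.17, 0.41],  # illustrative
                                amps=[2.0, 1.0, 0.5],         # e.g., degrees
                                phases=[0.0, 1.0, 2.0])
```

Because the signal never repeats a simple cycle within a trial, it stays smooth and hard to notice, consistent with subjects being generally unaware of the perturbations.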
Affiliation(s)
- Xing Xing: Department of Psychology, The University of Hong Kong, Pokfulam, Hong Kong SAR
- Jeffrey A Saunders: Department of Psychology, The University of Hong Kong, Pokfulam, Hong Kong SAR
8. Wang C, Wang G, Lu A, Zhao Y. Effects of Attentional Control on Gait and Inter-Joint Coordination During Dual-Task Walking. Front Psychol 2021;12:665175. PMID: 34366983; PMCID: PMC8334006; DOI: 10.3389/fpsyg.2021.665175.
Abstract
In the process of walking, attentional resources are flexibly allocated to deal with varying environmental constraints, a process correlated with attentional control (AC). A dual-task paradigm was used to investigate the effects of AC on gait and inter-joint coordination. Fifty students volunteered to participate in this study. Based on reaction time (RT) in the Stroop task, the top 15 participants were assigned to the High Attentional Control (HAC) group and the bottom 15 to the Low Attentional Control (LAC) group. Participants in the two groups were randomly asked to perform three tasks: (i) a single 2-back working memory task (ST 2-back); (ii) a single walking task (ST walking); and (iii) a dual task (DT). Cognitive outcomes and spatiotemporal gait parameters were measured. Continuous relative phase (CRP), derived from the phase angles of two adjacent joints, was used to assess inter-joint coordination. The LAC group exhibited significant task effects on RT, correct rate (CR), step width, gait cycle, step time, forefoot contact times, heel-forefoot times, hip-knee mean absolute relative phase (MARP), and deviation phase (DP) in the stance and swing phases (p < 0.05). In the HAC group, significant task effects were detected only in RT and the foot progression angle of the left foot (p < 0.05). Across the three task conditions, the LAC group exhibited a higher CR in ST, longer heel contact times, and longer heel-forefoot times than the HAC group (p < 0.05). Compared with the LAC group, the HAC group exhibited significantly smaller (closer to zero) hip-knee MARP and weaker hip-knee DP values in the swing phase across all gait conditions (p < 0.05). In the stance phase, the HAC group had smaller (closer to zero) MARP values than the LAC group (p < 0.05).
In conclusion, the ability of young adults to maintain gait control and modulate inter-joint coordination patterns when accommodating gait disturbances is affected by their level of attentional control. AC is correlated with motor control performance, which theoretically supports the competitive selection of athletes and fall-prevention strategies for specific populations.
Affiliation(s)
- Cenyi Wang, Guodong Wang, Aming Lu, and Ying Zhao: School of Physical Education and Sports Science, Soochow University, Suzhou, China
9. Gaze behavior during pedestrian interactions in a community environment: a real-world perspective. Exp Brain Res 2021;239:2317-2330. PMID: 34091697; DOI: 10.1007/s00221-021-06145-1.
Abstract
Locomotor adaptations, as required for community walking, rely heavily on the sense of vision. Little is known, however, about gaze behavior during pedestrian interactions while ambulating in the community. Our objective was to characterize gaze behavior while walking in a community environment and interacting with pedestrians of different locations and directions. Twelve healthy young individuals were assessed as they walked in a shopping mall from a pre-set location to a goal located 20 m ahead. Eye movements were recorded with a binocular eye-tracker and temporal distance factors were assessed using wearable sensors from a full-body motion capture system. Participants exhibited more numerous and longer gaze episodes on pedestrians (GEP) that were walking in the same direction as themselves vs. those that were in the opposite direction. The relative durations of GEPs, however, showed no significant differences between pedestrians walking in the same vs. opposite direction. Longer durations of GEPs were also observed for centrally located pedestrians compared to those located on either side, but this was the case only for pedestrians that were walking in the same direction as participants. In addition, pedestrians in the centre, and even more so those on the right, were fixated at farther distances compared to those on the left. Results indicate that healthy young individuals modulate their gaze behavior as a function of the location and direction of pedestrians when ambulating in a community environment. The observed modulation is interpreted as being caused by an interplay between collision risk, pedestrian visibility, presence of leaders and social conventions (right-sided circulation). Present results also establish baseline measures for the quantification of defective visuomotor strategies in individuals with mobility disorders.
10. Zhao H, Straub D, Rothkopf CA. How do people steer a car to intercept a moving target: Interceptions in different environments point to one strategy. Q J Exp Psychol (Hove) 2021;74:1686-1696. PMID: 33749396; DOI: 10.1177/17470218211007480.
Abstract
Which strategy people use to guide locomotor interception remains unclear despite considerable research, and an answer would have ramifications for the heuristics-and-biases debate. Because the constant bearing (CB) strategy corresponds to the constant target-heading (CTH) strategy with an additional constraint, these two strategies can be confounded experimentally. But the two strategies are distinct in the information they require: while the CTH strategy only requires access to the relative angle between the direction of motion and the target, the CB strategy requires access to a stable allocentric reference frame. Here, we manipulated the visual information about allocentric reference frames in three virtual environments and asked participants to steer a car to intercept a moving target. Participants' interception paths showed different degrees of curvature, and their target-heading angles were approximately constant, consistent with the CTH strategy. By contrast, the target's bearing angle continuously changed in all participants except one. This particular participant produced linear interception paths with little change in the target's bearing angle, seemingly consistent with both strategies. This participant continued this pattern of steering even in the environment without any visual information about allocentric reference frames; this pattern of steering is therefore attributed to the CTH strategy rather than the CB strategy. The overall results add important evidence for the conclusion that locomotor interception is better accounted for by the CTH strategy and that experimentally observing a straight interception trajectory with a constant bearing angle is not sufficient evidence for the CB strategy.
Affiliation(s)
- Huaiyong Zhao: Institute of Psychology, Technical University of Darmstadt, Darmstadt, Germany; Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Dominik Straub: Institute of Psychology, Technical University of Darmstadt, Darmstadt, Germany
- Constantin A Rothkopf: Institute of Psychology and Center for Cognitive Science, Technical University of Darmstadt, Darmstadt, Germany; Frankfurt Institute for Advanced Studies, Goethe University, Frankfurt, Germany
11. Warren WH. Information Is Where You Find It: Perception as an Ecologically Well-Posed Problem. Iperception 2021;12:20416695211000366. PMID: 33815740; PMCID: PMC7995459; DOI: 10.1177/20416695211000366.
Abstract
Texts on visual perception typically begin with the following premise: Vision is an ill-posed problem, and perception is underdetermined by the available information. If this were really the case, however, it is hard to see how vision could ever get off the ground. James Gibson's signal contribution was his hypothesis that for every perceivable property of the environment, however subtle, there must be a higher-order variable of information, however complex, that specifies it, if only we are clever enough to find it. Such variables are informative about behaviorally relevant properties within the physical and ecological constraints of a species' niche. Sensory ecology is replete with instructive examples, including weakly electric fish, the narwhal's tusk, and insect flight control. In particular, I elaborate the case of passing through gaps. Optic flow is sufficient to control locomotion around obstacles and through openings. The affordances of the environment, such as gap passability, are specified by action-scaled information. Logically ill-posed problems may thus, on closer inspection, be ecologically well-posed.
12. Rogers B. Optic Flow: Perceiving and Acting in a 3-D World. Iperception 2021;12:2041669520987257. PMID: 33613957; PMCID: PMC7869175; DOI: 10.1177/2041669520987257.
Abstract
In 1979, James Gibson completed his third and final book, "The Ecological Approach to Visual Perception". That book can be seen as the synthesis of the many radical ideas he proposed over the previous 30 years: the concept of information and its sufficiency, the necessary link between perception and action, the need to see perception in relation to an animal's particular ecological niche, and the meanings (affordances) offered by the visual world. One of the fundamental concepts that lies behind all of Gibson's thinking is that of optic flow: the constantly changing patterns of light that reach our eyes and the information they provide. My purpose in writing this paper has been to evaluate the legacy of Gibson's conceptual ideas and to consider how they have influenced and changed the way we study perception.
13. Macuga KL, Beall AC, Smith RS, Loomis JM. Visual control of steering in curve driving. J Vis 2020;19:1. PMID: 31042254; DOI: 10.1167/19.5.1.
Abstract
This pair of studies investigated steering in the absence of continuous visual information. In a driving simulator, participants steered along a curving path that was displayed either continuously or intermittently. Optic flow conditions were manipulated to alter the nature of the heading information with respect to the path being steered. Removing or biasing heading information had little effect on steering, even during long and frequent path occlusions, as long as turn rate was available. This demonstrates that participants can use intermittent views of the path to plan their steering actions and optic flow to accurately update vehicle turns with respect to that path.
Affiliation(s)
- Kristen L Macuga: School of Psychological Science, Oregon State University, Corvallis, OR, USA
- Andrew C Beall: Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA, USA
- Roy S Smith: Department of Information Technology and Electrical Engineering, Swiss Federal Institute of Technology, Zürich, Switzerland
- Jack M Loomis: Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA, USA
14. Tsutsui K, Shinya M, Kudo K. Human Navigational Strategy for Intercepting an Erratically Moving Target in Chase and Escape Interactions. J Mot Behav 2019;52:750-760. PMID: 31790635; DOI: 10.1080/00222895.2019.1692331.
Abstract
Pursuit and interception of moving targets are fundamental skills of many animal species. Although previous studies of human interception behavior have proposed several navigational strategies for intercepting a moving target, it is still unknown which navigational strategy humans use in chase-and-escape interactions. In the present experimental study, using two one-on-one tasks as seen in ball sports, we showed that human interception behaviors were statistically consistent with a time-optimal model. Our results provide insight into the navigational strategy for intercepting a moving target in chase-and-escape interactions, which may be common across species.
Affiliation(s)
- Kazushi Tsutsui: Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
- Masahiro Shinya: Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan; Graduate School of Integrated Arts and Sciences, Hiroshima University, Hiroshima, Japan
- Kazutoshi Kudo: Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan; Graduate School of Interdisciplinary Information Studies, The University of Tokyo, Tokyo, Japan
15. Zhao H, Straub D, Rothkopf CA. The visual control of interceptive steering: How do people steer a car to intercept a moving target? J Vis 2019;19:11. PMID: 31830240; DOI: 10.1167/19.14.11.
Abstract
The visually guided interception of a moving target is a fundamental visuomotor task that humans can do with ease, but how humans carry out this task is still unclear despite numerous empirical investigations. Measurements of angular variables during human interception have suggested three possible strategies: the pursuit strategy, the constant bearing angle strategy, and the constant target-heading strategy. Here, we review previous experimental paradigms and show that some of them do not allow one to distinguish among the three strategies. Based on this analysis, we devised a virtual driving task that allows one to investigate which of the three strategies best describes human interception. Crucially, we measured participants' steering, head, and gaze directions over time for three different target velocities. Subjects initially aligned head and gaze with the direction of the car's heading. When the target appeared, subjects centered their gaze on the target, pointed their head slightly off the heading direction toward the target, and maintained an approximately constant target-heading angle, whose magnitude varied across participants, while the target's bearing angle continuously changed. With a second condition, in which the target was partially occluded, we investigated several alternative hypotheses about participants' visual strategies. Overall, the results suggest that interceptive steering is best described by the constant target-heading strategy and that gaze and head are coordinated to continuously acquire visual information to achieve successful interception.
Affiliation(s)
- Huaiyong Zhao
- Institute of Psychology, Technical University Darmstadt, Darmstadt, Germany
- Dominik Straub
- Institute of Psychology, Technical University Darmstadt, Darmstadt, Germany
- Constantin A Rothkopf
- Institute of Psychology, Technical University Darmstadt, Darmstadt, Germany; Center for Cognitive Science, Technical University Darmstadt, Germany; Frankfurt Institute for Advanced Studies, Goethe University, Germany
16
No Evidence That Frontal Optical Flow Affects Perceived Locomotor Speed and Locomotor Biomechanics When Running on a Treadmill. Applied Sciences (Basel) 2019. [DOI: 10.3390/app9214589]
Abstract
We investigated how the presentation and the manipulation of an optical flow while running on a treadmill affect perceived locomotor speed (Experiment 1) and gait parameters (Experiment 2). In Experiment 1, 12 healthy participants were instructed to run at an imposed speed and to focus on their sensorimotor sensations to be able to reproduce this running speed later. After a pause, they had to retrieve the reference locomotor speed by manipulating the treadmill speed while being presented with different optical flow conditions, namely no optical flow or a matching/slower/faster optical flow. In Experiment 2, 20 healthy participants ran at a previously self-selected constant speed while being presented with different optical flow conditions (see Experiment 1). The results did not show any effect of the presence and manipulation of the optical flow either on perceived locomotor speed or on the biomechanics of treadmill running. Specifically, the ability to retrieve the reference locomotor speed was similar for all optical flow conditions. Manipulating the speed of the optical flow did not affect the spatiotemporal gait parameters and also failed to affect the treadmill running accommodation process. Nevertheless, the virtual reality conditions affected the heart rate of the participants but without affecting perceived effort.
17
van Andel S, McGuckian TB, Chalkley D, Cole MH, Pepping GJ. Principles of the Guidance of Exploration for Orientation and Specification of Action. Front Behav Neurosci 2019; 13:231. [PMID: 31636549] [PMCID: PMC6788258] [DOI: 10.3389/fnbeh.2019.00231]
Abstract
To control movement of any type, the neural system requires perceptual information to distinguish what actions are possible in any given environment. The behavior aimed at collecting this information, termed "exploration", is vital for successful movement control. Currently, the main function of exploration is understood in the context of specifying the requirements of the task at hand. To accommodate for agency and action-selection, we propose that this understanding needs to be supplemented with a function of exploration that logically precedes the specification of action requirements, with the purpose of discovering possibilities for action: action orientation. This study aimed to provide evidence for the delineation of exploration for action orientation and exploration for action specification using the principles from "General Tau Theory." Sixteen male participants volunteered and performed a laboratory-based exploration task. The visual scenes of different task-specific situations were projected on five monitors surrounding the participant. At a predetermined time, the participant received a simulated ball and was asked to respond by indicating where they would next play the ball. Head movements were recorded using inertial sensors as a measure of exploratory activity. It was shown that movement guidance characteristics varied between different head turns as participants moved from exploration for orientation to exploration for action specification. The first head turn in the trial, used for action orientation, showed later peaks in the velocity profile and harder closure of the movement gap (the gap between the start and end of the head movement) in comparison to the later head turns. However, no differences were found between the first and the final head turn, which we hypothesized to be used mainly for action orientation and specification, respectively.
These results are in support of differences in the function and control of head movement for discovery of opportunities for action (orientation) vs. head movement for specification of task requirements. Both are important for natural movement, yet in experimental settings, orientation is often neglected. Including both orientation and action specification in an experimental design should maximize generalizability of an experiment to natural behavior. Future studies are required to study the neural bases of movement guidance in order to better understand exploration in anticipation of movement.
Affiliation(s)
- Gert-Jan Pepping
- School of Behavioural and Health Sciences, Australian Catholic University, Brisbane, QLD, Australia
18
Bolling L, Stein N, Steinicke F, Lappe M. Shrinking Circles: Adaptation to Increased Curvature Gain in Redirected Walking. IEEE Transactions on Visualization and Computer Graphics 2019; 25:2032-2039. [PMID: 30794515] [DOI: 10.1109/tvcg.2019.2899228]
Abstract
Real walking is the most natural way to locomote in virtual reality (VR), but a confined physical walking space limits its applicability. Redirected walking (RDW) is a collection of techniques to solve this problem. One of these techniques aims to imperceptibly rotate the user's view of the virtual scene in order to steer her along a confined path whilst giving the impression of walking in a straight line in a large virtual space. Measurements of perceptual thresholds for the detection of such a modified curvature gain have indicated a radius that is still larger than most room sizes. Since the brain is an adaptive system and thresholds usually depend on previous stimulations, we tested whether prolonged exposure to an immersive virtual environment (IVE) with increased curvature gain produces adaptation to that gain and modifies thresholds such that, over time, larger curvature gains can be applied for RDW. Participants first completed a measurement of their perceptual threshold for curvature gain. In a second session, the same participants were exposed to an IVE with a constant curvature gain in which they walked between two targets for about 20 minutes. Afterwards, their perceptual thresholds were measured again. The results show that the psychometric curves shifted after the exposure session and perceptual thresholds for increased curvature gain further increased. This increase of the detection threshold suggests that participants adapt to the manipulation, that stronger curvature gains can be applied in RDW, and that RDW's applicability in confined spaces therefore improves.
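The link between a curvature gain and the physical space it demands is purely geometric: a gain that rotates the scene by g degrees per meter walked forces the user onto a circle of radius 1/g (with g in radians per meter). A small sketch, with threshold values chosen for illustration rather than taken from the paper:

```python
import math

def curvature_radius_m(gain_deg_per_m):
    """Physical circle radius implied by a curvature gain expressed as
    degrees of scene rotation per meter walked."""
    return 1.0 / math.radians(gain_deg_per_m)

def min_room_side_m(gain_deg_per_m):
    """Side of a square room that contains the full walking circle."""
    return 2.0 * curvature_radius_m(gain_deg_per_m)
```

With an illustrative pre-adaptation threshold of 2.6 deg/m the walking circle has a radius of about 22 m; if adaptation let a stronger (hypothetical) gain of 5 deg/m go unnoticed, the required radius would shrink to roughly 11.5 m, which is the practical payoff the study points at.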
19
Macuga KL. Multisensory Influences on Driver Steering During Curve Navigation. Human Factors 2019; 61:337-347. [PMID: 30320509] [DOI: 10.1177/0018720818805898]
Abstract
OBJECTIVE The effects of inertial (vestibular and somatosensory) information on driver steering during curve navigation were investigated, using an electric four-wheel mobility vehicle outfitted with a steering wheel and a portable virtual reality system. BACKGROUND When driving, multiple sources of perceptual information are available. Researchers have focused on visual information, which plays a critical role in steering control. However, it is not yet well established how inertial information might contribute. METHODS I biased inertial cues by varying visual/inertial gains (doubled, halved, reversed), as drivers negotiated curving paths, and measured steering accuracy and efficiency. I also assessed whether being exposed to inertial biases had an impact on postbias steering by comparing pre- and posttest session performance measures. RESULTS Doubling or halving inertial cues had little effect on steering performance. Inertial information only disrupted steering when it was reversed with respect to visual information. Over time, the influence of this extreme inertial bias was reduced though not eliminated. Postbias curve navigation performance was not impacted, likely because participants had learned to disregard, rather than integrate, biased inertial cues. CONCLUSION Results suggest that biased inertial information has little influence on curve navigation performance when visual information is available. APPLICATION Though inertial cues may be important for open-loop steering, when visual cues are unavailable, their role in closed-loop steering seems less influential. This has implications for driving simulation and suggests that inertial discrepancies due to limitations in motion-cuing capabilities may not be all that problematic for the simulation of closed-loop curve steering tasks.
20
Dunn MJ, Rushton SK. Lateral visual occlusion does not change walking trajectories. J Vis 2018; 18:11. [PMID: 30208430] [PMCID: PMC6141229] [DOI: 10.1167/18.9.11]
Abstract
Difficulties with walking are often reported following brain damage that causes a lateralized loss of awareness on one side. Whether lateralized loss of awareness has a direct causal impact on walking is unknown. A review of the literature on visually guided walking suggests several reasons why a lateralized loss of visual awareness might be expected to lead to difficulties walking. Here, we isolated and examined the effect of lateralized vision loss on walking behavior in real and virtual environments. Healthy young participants walked to a target placed within a real room, in a virtual corridor, or on a virtual ground plane. In the ground-plane condition, the scene either was empty or contained three obstacles. We reduced vision on one side by occluding one eye (Experiments 1 and 2) or removing one hemifield, defined relative to either the head or trunk (Experiment 2), through use of eye patching (Experiment 1) and a virtual-reality system (Experiment 2). Visual-field restrictions did not induce significant deviations in walking paths in any of the occlusion conditions or any of the environments. The results provide further insight into the visual information that guides walking in humans, and suggest that lateralized vision loss on its own is not the primary cause of walking difficulties.
Affiliation(s)
- Matt J Dunn
- School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK
21
Qiu C, Jung JH, Tuccar-Burak M, Spano L, Goldstein R, Peli E. Measuring Pedestrian Collision Detection With Peripheral Field Loss and the Impact of Peripheral Prisms. Transl Vis Sci Technol 2018; 7:1. [PMID: 30197833] [PMCID: PMC6126965] [DOI: 10.1167/tvst.7.5.1]
Abstract
Purpose Peripheral field loss (PFL) due to retinitis pigmentosa, choroideremia, or glaucoma often results in a highly constricted residual central field, which makes it difficult for patients to avoid collision with approaching pedestrians. We developed a virtual environment to evaluate the ability of patients to detect pedestrians and judge potential collisions. We validated the system with both PFL patients and normally sighted subjects with simulated PFL. We also tested whether properly placed high-power prisms may improve pedestrian detection. Methods A virtual park-like open space was rendered using a driving simulator (configured for walking speeds), and pedestrians in testing scenarios appeared within and outside the residual central field. Nine normally sighted subjects and eight PFL patients performed the pedestrian detection and collision judgment tasks. The performance of the subjects with simulated PFL was further evaluated with field of view expanding prisms. Results The virtual system for testing pedestrian detection and collision judgment was validated. The performance of PFL patients and normally sighted subjects with simulated PFL were similar. The prisms for simulated PFL improved detection rates, reduced detection response times, and supported reasonable collision judgments in the prism-expanded field; detections and collision judgments in the residual central field were not influenced negatively by the prisms. Conclusions The scenarios in a virtual environment are suitable for evaluating PFL and the impact of field of view expanding devices. Translational Relevance This study validated an objective means to evaluate field expansion devices in reproducible near-real-life settings.
Affiliation(s)
- Cheng Qiu
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Jae-Hyun Jung
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Merve Tuccar-Burak
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Lauren Spano
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Robert Goldstein
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Eli Peli
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
22
When flow is not enough: evidence from a lane changing task. Psychological Research 2018; 84:834-849. [PMID: 30088078] [DOI: 10.1007/s00426-018-1070-z]
Abstract
Humans are able to estimate their heading on the basis of optic flow information and it has been argued that we use flow in this way to guide navigation. Consistent with this idea, several studies have reported good navigation performance in flow fields. However, one criticism of these studies is that they have generally focused on the task of walking or steering towards a target, offering an additional, salient directional cue. Hence, it remains a matter of debate as to whether humans are truly able to control steering in the presence of optic flow alone. In this study, we report a set of maneuvers carried out in flow fields in the absence of a physical target. To do this, we studied the everyday task of lane changing, a commonplace multiphase steering maneuver which can be conceptualized without the need for a target. What is more (and here is the crucial quirk), previous literature has found that in the absence of visual feedback, drivers show a systematic, asymmetric steering response, resulting in a systematic final heading error. If optic flow is sufficient for controlling navigation through our environment, we would expect this asymmetry to disappear whenever optic flow is provided. However, our results show that this asymmetry persisted, even in the presence of a flow field, implying that drivers are unable to use flow to guide normal steering responses in this task.
23
Abstract
The ability to navigate through crowds of moving people accurately, efficiently, and without causing collisions is essential for our day-to-day lives. Vision provides key information about one's own self-motion as well as the motions of other people in the crowd. These two types of information (optic flow and biological motion) have each been investigated extensively; however, surprisingly little research has been dedicated to investigating how they are processed when presented concurrently. Here, we showed that patterns of biological motion have a negative impact on visual-heading estimation when people within the crowd move their limbs but do not move through the scene. Conversely, limb motion facilitates heading estimation when walkers move independently through the scene. Interestingly, this facilitation occurs for crowds containing both regular and perturbed depictions of humans, suggesting that it is likely caused by low-level motion cues inherent in the biological motion of other people.
Affiliation(s)
- Hugh Riddell
- Department of Psychology, Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster
| | - Markus Lappe
- Department of Psychology, Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster
24
Rushton SK, Chen R, Li L. Ability to identify scene-relative object movement is not limited by, or yoked to, ability to perceive heading. J Vis 2018; 18:11. [PMID: 30029224] [DOI: 10.1167/18.6.11]
Abstract
During locomotion humans can judge where they are heading relative to the scene and the movement of objects within the scene. Both judgments rely on identifying global components of optic flow. What is the relationship between the perception of heading, and the identification of object movement during self-movement? Do they rely on a shared mechanism? One way to address these questions is to compare performance on the two tasks. We designed stimuli that allowed direct comparison of the precision of heading and object movement judgments. Across a series of experiments, we found the precision was typically higher when judging scene-relative object movement than when judging heading. We also found that manipulations of the content of the visual scene can change the relative precision of the two judgments. These results demonstrate that the ability to judge scene-relative object movement during self-movement is not limited by, or yoked to, the ability to judge the direction of self-movement.
Affiliation(s)
- Simon K Rushton
- School of Psychology, Cardiff University, Cardiff, Wales, UK
| | - Rongrong Chen
- Department of Psychology, The University of Hong Kong, Hong Kong SAR
| | - Li Li
- Department of Psychology, The University of Hong Kong, Hong Kong SAR.,Neural Science Program, NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, PRC
25
Post-stroke unilateral spatial neglect: virtual reality-based navigation and detection tasks reveal lateralized and non-lateralized deficits in tasks of varying perceptual and cognitive demands. J Neuroeng Rehabil 2018; 15:34. [PMID: 29685145] [PMCID: PMC5913876] [DOI: 10.1186/s12984-018-0374-y]
Abstract
BACKGROUND Unilateral spatial neglect (USN), a highly prevalent and disabling post-stroke impairment, has been shown to affect the recovery of locomotor and navigation skills needed for community mobility. We recently found that USN alters goal-directed locomotion in conditions of different cognitive/perceptual demands. However, sensorimotor post-stroke dysfunction (e.g. decreased walking speed) could have influenced the results. Analogous to a previously used goal-directed locomotor paradigm, a seated, joystick-driven navigation experiment, minimizing locomotor demands, was employed in individuals with and without post-stroke USN (USN+ and USN-, respectively) and healthy controls (HC). METHODS Participants (n = 15 per group) performed a seated, joystick-driven navigation and detection time task to targets 7 m away at 0°, ±15°/30° in actual (visually-guided), remembered (memory-guided) and shifting (visually-guided with representational updating component) conditions while immersed in a 3D virtual reality environment. RESULTS Greater end-point mediolateral errors to left-sided targets (remembered and shifting conditions) and overall lengthier onsets in reorientation strategy (shifting condition) were found for USN+ vs. USN- and vs. HC (p < 0.05). USN+ individuals mostly overshot left targets (- 15°/- 30°). Greater delays in detection time for target locations across the visual spectrum (left, middle and right) were found in USN+ vs. USN- and HC groups (p < 0.05). CONCLUSION USN-related attentional-perceptual deficits alter navigation abilities in memory-guided and shifting conditions, independently of post-stroke locomotor deficits. Lateralized and non-lateralized deficits in object detection are found. The employed paradigm could be considered in the design and development of sensitive and functional assessment methods for neglect; thereby addressing the drawbacks of currently used traditional paper-and-pencil tools.
26
Ogourtsova T, Archambault P, Sangani S, Lamontagne A. Ecological Virtual Reality Evaluation of Neglect Symptoms (EVENS): Effects of Virtual Scene Complexity in the Assessment of Poststroke Unilateral Spatial Neglect. Neurorehabil Neural Repair 2018; 32:46-61. [DOI: 10.1177/1545968317751677]
Abstract
Background. Unilateral spatial neglect (USN) is a highly prevalent and disabling poststroke impairment. USN is traditionally assessed with paper-and-pencil tests that lack ecological validity, generalization to real-life situations and are easily compensated for in chronic stages. Virtual reality (VR) can, however, counteract these limitations. Objective. We aimed to examine the feasibility of a novel assessment of USN symptoms in a functional shopping activity, the Ecological VR-based Evaluation of Neglect Symptoms (EVENS). Methods. EVENS is immersive and consists of simple and complex 3-dimensional scenes depicting grocery shopping shelves, where joystick-based object detection and navigation tasks are performed while seated. Effects of virtual scene complexity on navigational and detection abilities in patients with (USN+, n = 12) and without (USN−, n = 15) USN following a right hemisphere stroke and in age-matched healthy controls (HC, n = 9) were determined. Results. Longer detection times, larger mediolateral deviations from ideal paths and longer navigation times were found in USN+ versus USN− and HC groups, particularly in the complex scene. EVENS detected lateralized and nonlateralized USN-related deficits, performance alterations that were dependent or independent of USN severity, and performance alterations in 3 USN− subjects versus HC. Conclusion. EVENS’ environmental changing complexity, along with the functional tasks of far space detection and navigation can potentially be clinically relevant and warrant further empirical investigation. Findings are discussed in terms of attentional models, lateralized versus nonlateralized deficits in USN, and tasks-specific mechanisms.
Affiliation(s)
- Tatiana Ogourtsova
- McGill University, Montreal, Quebec, Canada
- Jewish Rehabilitation Hospital, Laval, Quebec, Canada
- Philippe Archambault
- McGill University, Montreal, Quebec, Canada
- Jewish Rehabilitation Hospital, Laval, Quebec, Canada
- Samir Sangani
- Jewish Rehabilitation Hospital, Laval, Quebec, Canada
- Anouk Lamontagne
- McGill University, Montreal, Quebec, Canada
- Jewish Rehabilitation Hospital, Laval, Quebec, Canada
27
The Primary Role of Flow Processing in the Identification of Scene-Relative Object Movement. J Neurosci 2017; 38:1737-1743. [PMID: 29229707] [PMCID: PMC5815455] [DOI: 10.1523/jneurosci.3530-16.2017]
Abstract
Retinal image motion could be due to the movement of the observer through space or an object relative to the scene. Optic flow, form, and change of position cues all provide information that could be used to separate out retinal motion due to object movement from retinal motion due to observer movement. In Experiment 1, we used a minimal display to examine the contribution of optic flow and form cues. Human participants indicated the direction of movement of a probe object presented against a background of radially moving pairs of dots. By independently controlling the orientation of each dot pair, we were able to put flow cues to self-movement direction (the point from which all the motion radiated) and form cues to self-movement direction (the point toward which all the dot pairs were oriented) in conflict. We found that only flow cues influenced perceived probe movement. In Experiment 2, we switched to a rich stereo display composed of 3D objects to examine the contribution of flow and position cues. We moved the scene objects to simulate a lateral translation and counter-rotation of gaze. By changing the polarity of the scene objects (from light to dark and vice versa) between frames, we placed flow cues to self-movement direction in opposition to change of position cues. We found that again flow cues dominated the perceived probe movement relative to the scene. Together, these experiments indicate the neural network that processes optic flow has a primary role in the identification of scene-relative object movement. SIGNIFICANCE STATEMENT Motion of an object in the retinal image indicates relative movement between the observer and the object, but it does not indicate its cause: movement of an object in the scene; movement of the observer; or both. To isolate retinal motion due to movement of a scene object, the brain must parse out the retinal motion due to movement of the eye (“flow parsing”). Optic flow, form, and position cues all have potential roles in this process. We pitted the cues against each other and assessed their influence. We found that flow parsing relies on optic flow alone. These results indicate the primary role of the neural network that processes optic flow in the identification of scene-relative object movement.
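The "flow parsing" operation — subtracting the retinal motion attributable to self-movement before judging an object's motion — can be sketched in a few lines. The sketch below assumes the simplest possible self-motion flow field, a uniform translation (as for lateral observer movement past a fronto-parallel scene); the computation over a full radial flow field is richer, and the function name is ours, not the authors'.

```python
import numpy as np

def parse_object_motion(background_flow, probe_flow):
    """Estimate the self-motion component as the mean background flow
    (valid only under the uniform-translation assumption above) and
    subtract it to recover the probe's scene-relative motion."""
    self_motion = np.asarray(background_flow, dtype=float).mean(axis=0)
    return np.asarray(probe_flow, dtype=float) - self_motion
```

For example, if every background element drifts rightward by (1.0, 0.0) and the probe's retinal motion is (1.0, 0.5), the parsed scene-relative motion is (0.0, 0.5): the probe is seen as moving upward in the scene even though its retinal motion is mostly rightward.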
28
Shirai N, Imura T. Infant-specific gaze patterns in response to radial optic flow. Sci Rep 2016; 6:34734. [PMID: 27708361] [PMCID: PMC5052525] [DOI: 10.1038/srep34734]
Abstract
The focus of a radial optic flow is a valid visual cue used to perceive and control the heading direction of animals. Gaze patterns in response to the focus of radial optic flow were measured in human infants (N = 100, 4–18 months) and in adults (N = 20) using an eye-tracking technique. Overall, although the adults showed an advantage in detecting the focus of an expansion flow (representing forward locomotion) against that of a contraction flow (representing backward locomotion), infants younger than 1 year showed an advantage in detecting the focus of a contraction flow. Infants aged between 13 and 18 months showed no significant advantage in detecting the focus in either the expansion or in the contraction flow. The uniqueness of the gaze patterns in response to the focus of radial optic flow in infants shows that the visual information necessary to perceive heading direction potentially differs between younger and mature individuals.
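In a purely radial flow field, every flow vector points along the line from the focus of expansion (FOE) through its image location, so the FOE is recoverable by least squares — the kind of computation an observer must approximate to use the focus as a heading cue. A minimal sketch on synthetic vectors (the field parameters are invented for illustration):

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares FOE: each flow (vx, vy) at (px, py) should satisfy
    vy*(px - fx) - vx*(py - fy) = 0, i.e. the vector is radial about f."""
    points, flows = np.asarray(points), np.asarray(flows)
    A = np.column_stack([flows[:, 1], -flows[:, 0]])
    b = flows[:, 1] * points[:, 0] - flows[:, 0] * points[:, 1]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# synthetic expansion flow radiating from (2.0, 3.0)
rng = np.random.default_rng(0)
pts = rng.uniform(-10, 10, size=(50, 2))
foe_true = np.array([2.0, 3.0])
vecs = 0.1 * (pts - foe_true)   # expansion: vectors point away from the FOE
```

Flipping the sign of `vecs` gives a contraction flow (backward locomotion) with the same recovered focus, so the expansion/contraction asymmetry the study reports in infants is perceptual rather than geometric.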
Affiliation(s)
- Nobu Shirai
- Department of Psychology, Faculty of Humanities, Niigata University 2-8050 Ikarashi Nishi-Ku Niigata, 950-2181, Japan
- Tomoko Imura
- Department of Information Systems, Faculty of Information Culture, Niigata University of International and Information Studies, 3-1-1, Mizukino, Nishi-ku, Niigata, 950-2292, Japan
29
Abstract
When steering down a winding road, drivers have been shown to use both near and far regions of the road for guidance during steering. We propose a model of steering that explicitly embodies this idea, using both a ‘near point’ to maintain a central lane position and a ‘far point’ to account for the upcoming roadway. Unlike control models that integrate near and far information to compute curvature or more complex features, our model relies solely on one perceptually plausible feature of the near and far points, namely the visual direction to each point. The resulting parsimonious model can be run in simulation within a realistic highway environment to facilitate direct comparison between model and human behavior. Using such simulations, we demonstrate that the proposed two-point model is able to account for four interesting aspects of steering behavior: curve negotiation with occluded visual regions, corrective steering after a lateral drift, lane changing, and individual differences.
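The two-point control law described above can be written compactly: steer in proportion to the rotation rates of the near- and far-point visual directions, plus a corrective term on the near-point direction that recenters the car in the lane. The gains below are illustrative placeholders, not fitted values from the model.

```python
def two_point_steering(theta_near, theta_far, dtheta_near, dtheta_far,
                       k_far=1.0, k_near=1.0, k_i=0.3):
    """Steering adjustment in the spirit of a two-point model: the far
    point supplies anticipation of the upcoming roadway, the near point
    maintains a central lane position. Angles are visual directions
    (radians); dtheta_* are their rates of change."""
    return k_far * dtheta_far + k_near * dtheta_near + k_i * theta_near
```

With all rates zero and a residual near-point offset, only the corrective term acts, producing the slow recentering after a lateral drift that the abstract lists among the behaviors the model captures.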
Affiliation(s)
- Dario D Salvucci
- Department of Computer Science, Drexel University, 3141 Chestnut Street, Philadelphia, PA 19104, USA.
30
Wu J, He ZJ, Ooi TL. Visually Perceived Eye Level and Horizontal Midline of the Body Trunk Influenced by Optic Flow. Perception 2005; 34:1045-60. [PMID: 16245484] [DOI: 10.1068/p5416]
Abstract
The eye level and the horizontal midline of the body trunk can serve, respectively, as references for judging the vertical and horizontal egocentric directions. We investigated whether the optic-flow pattern, which is the dynamic motion information generated when one moves in the visual world, can be used by the visual system to determine and calibrate these two references. Using a virtual-reality setup to generate the optic-flow pattern, we showed that the judged elevation of the eye level and the azimuth of the horizontal midline of the body trunk are biased toward the position of the focus of expansion (FOE) of the optic-flow pattern. Furthermore, for the vertical reference, prolonged viewing of an optic-flow pattern with lowered FOE not only causes a lowered judged eye level after removal of the optic-flow pattern, but also an overestimation of distance in the dark. This is equivalent to a reduction in the judged angular declination of the object after adaptation, indicating that the optic-flow information also plays a role in calibrating the extraretinal signals used to establish the vertical reference.
Affiliation(s)
- Jun Wu
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY 40292, USA
31
Fajen BR. Perceiving Possibilities for Action: On the Necessity of Calibration and Perceptual Learning for the Visual Guidance of Action. Perception 2016; 34:717-40. [PMID: 16042193] [DOI: 10.1068/p5405] [Citation(s) in RCA: 107] [Impact Index Per Article: 13.4] [Indexed: 10/25/2022]
Abstract
Tasks such as steering, braking, and intercepting moving objects constitute a class of behaviors, known as visually guided actions, which are typically carried out under continuous control on the basis of visual information. Several decades of research on visually guided action have resulted in an inventory of control laws that describe for each task how information about the sufficiency of one's current state is used to make ongoing adjustments. Although a considerable amount of important research has been generated within this framework, several aspects of these tasks that are essential for successful performance cannot be captured. The purpose of this paper is to provide an overview of the existing framework, discuss its limitations, and introduce a new framework that emphasizes the necessity of calibration and perceptual learning. Within the proposed framework, successful human performance on these tasks is a matter of learning to detect and calibrate optical information about the boundaries that separate possible from impossible actions. This resolves a long-lasting incompatibility between theories of visually guided action and the concept of an affordance. The implications of adopting this framework for the design of experiments and models of visually guided action are discussed.
Affiliation(s)
- Brett R Fajen
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Carnegie Building 308, 110 Eighth Street, Troy, NY 12180-3590, USA.
32
Hibbard P. Reviews: Motion Vision: Computational, Neural and Ecological Constraints. Perception 2016. [DOI: 10.1068/p3010rvw] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/28/2022]
Affiliation(s)
- Paul Hibbard
- University of St Andrews, St Andrews, Fife KY16 9JU, Scotland, UK
33
Children's Brain Responses to Optic Flow Vary by Pattern Type and Motion Speed. PLoS One 2016; 11:e0157911. [PMID: 27326860] [PMCID: PMC4915671] [DOI: 10.1371/journal.pone.0157911] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Received: 08/14/2015] [Accepted: 06/07/2016] [Indexed: 01/20/2023] Open
Abstract
Structured patterns of global visual motion called optic flow provide crucial information about an observer's speed and direction of self-motion and about the geometry of the environment. Brain and behavioral responses to optic flow undergo considerable postnatal maturation, but relatively little brain imaging evidence describes the time course of development in motion processing systems in early to middle childhood, a time when psychophysical data suggest that there are changes in sensitivity. To fill this gap, electroencephalographic (EEG) responses were recorded in 4- to 8-year-old children who viewed three time-varying optic flow patterns (translation, rotation, and radial expansion/contraction) at three different speeds (2, 4, and 8 deg/s). Modulations of global motion coherence evoked coherent EEG responses at the first harmonic that differed by flow pattern and responses at the third harmonic and dot update rate that varied by speed. Pattern-related responses clustered over right lateral channels while speed-related responses clustered over midline channels. Both children and adults show widespread responses to modulations of motion coherence at the second harmonic that are not selective for pattern or speed. The results suggest that the developing brain segregates the processing of optic flow pattern from speed and that an adult-like pattern of neural responses to optic flow has begun to emerge by early to middle childhood.
34
Beyeler M, Oros N, Dutt N, Krichmar JL. A GPU-accelerated cortical neural network model for visually guided robot navigation. Neural Netw 2015; 72:75-87. [PMID: 26494281] [DOI: 10.1016/j.neunet.2015.09.005] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Received: 02/14/2015] [Revised: 07/17/2015] [Accepted: 09/22/2015] [Indexed: 11/27/2022]
35
Mackenzie AK, Harris JM. Eye movements and hazard perception in active and passive driving. Visual Cognition 2015; 23:736-757. [PMID: 26681913] [PMCID: PMC4673545] [DOI: 10.1080/13506285.2015.1079583] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.4] [Received: 10/24/2014] [Revised: 07/26/2015] [Accepted: 07/31/2015] [Indexed: 12/02/2022]
Abstract
Differences in eye movement patterns are often found when comparing passive viewing paradigms to actively engaging in everyday tasks. Arguably, investigations into visuomotor control should therefore be most useful when conducted in settings that incorporate the intrinsic link between vision and action. We present a study that compares oculomotor behaviour and hazard reaction times across a simulated driving task and a comparable, but passive, video-based hazard perception task. We found that participants scanned the road less during the active driving task and fixated closer to the front of the vehicle. Participants were also slower to detect the hazards in the driving task. Our results suggest that the interactivity of simulated driving places greater demands upon the visual and attentional systems than does simply viewing driving movies. We offer insights into why these differences occur and explore the possible implications of such findings within the wider context of driver training and assessment.
Affiliation(s)
- Andrew K Mackenzie
- School of Psychology & Neuroscience, University of St Andrews, St Andrews, UK
- Julie M Harris
- School of Psychology & Neuroscience, University of St Andrews, St Andrews, UK
36
Issen L, Huxlin KR, Knill D. Spatial integration of optic flow information in direction of heading judgments. J Vis 2015; 15:14. [PMID: 26024461] [DOI: 10.1167/15.6.14] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Indexed: 11/24/2022] Open
Abstract
While we know that humans are extremely sensitive to optic flow information about direction of heading, we do not know how they integrate information across the visual field. We adapted the standard cue perturbation paradigm to investigate how young adult observers integrate optic flow information from different regions of the visual field to judge direction of heading. First, subjects judged direction of heading when viewing a three-dimensional field of random dots simulating linear translation through the world. We independently perturbed the flow in one visual field quadrant to indicate a different direction of heading relative to the other three quadrants. We then used subjects' judgments of direction of heading to estimate the relative influence of flow information in each quadrant on perception. Human subjects behaved similarly to the ideal observer in terms of integrating motion information across the visual field with one exception: Subjects overweighted information in the upper half of the visual field. The upper-field bias was robust under several different stimulus conditions, suggesting that it may represent a physiological adaptation to the uneven distribution of task-relevant motion information in our visual world.
37
Okafuji Y, Fukao T, Inou H. Development of Automatic Steering System by Modeling Human Behavior Based on Optical Flow. Journal of Robotics and Mechatronics 2015. [DOI: 10.20965/jrm.2015.p0136] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Indexed: 11/09/2022]
Abstract
[Figure: Manipulated optical flow field] Recently, various driving support systems have been developed to improve safety. However, because drivers occasionally feel that something is wrong, such systems need to be designed around the information that drivers themselves perceive. We therefore focused on optical flow, one of the visual cues humans use, to improve driving feel. Humans are thought to perceive the direction of self-motion from optical flow and to use it during driving. By applying an optical-flow model to automatic steering systems, a human-oriented system might be developed. In this paper, we derive the focus of expansion (FOE), the direction of self-motion in optical flow, in the camera frame and propose a nonlinear control method based on the FOE. The effectiveness of the proposed method was verified through a vehicle simulation, and the results showed that the proposed method reproduces human behavior. Based on these results, this approach may serve as a foundation for human-oriented system designs.
38
Zhao H, Warren WH. On-line and model-based approaches to the visual control of action. Vision Res 2014; 110:190-202. [PMID: 25454700] [DOI: 10.1016/j.visres.2014.10.008] [Citation(s) in RCA: 42] [Impact Index Per Article: 4.2] [Received: 02/19/2014] [Revised: 10/08/2014] [Accepted: 10/09/2014] [Indexed: 10/24/2022]
Abstract
Two general approaches to the visual control of action have emerged in the last few decades, known as the on-line and model-based approaches. The key difference between them is whether action is controlled by current visual information or on the basis of an internal world model. In this paper, we evaluate three hypotheses: strong on-line control, strong model-based control, and a hybrid solution that combines on-line control with weak off-line strategies. We review experimental research on the control of locomotion and manual actions, which indicates that (a) an internal world model is neither sufficient nor necessary to control action at normal levels of performance; (b) current visual information is necessary and sufficient to control action at normal levels; and (c) under certain conditions (e.g. occlusion) action is controlled by less accurate, simple strategies such as heuristics, visual-motor mappings, or spatial memory. We conclude that the strong model-based hypothesis is not sustainable. Action is normally controlled on-line when current information is available, consistent with the strong on-line control hypothesis. In exceptional circumstances, action is controlled by weak, context-specific, off-line strategies. This hybrid solution is comprehensive, parsimonious, and able to account for a variety of tasks under a range of visual conditions.
Affiliation(s)
- Huaiyong Zhao
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, United States
- William H Warren
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, United States
39
Murray NG, Ponce de Leon M, Ambati VNP, Saucedo F, Kennedy E, Reed-Jones RJ. Simulated visual field loss does not alter turning coordination in healthy young adults. J Mot Behav 2014; 46:423-31. [PMID: 25204364] [DOI: 10.1080/00222895.2014.931272] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Indexed: 10/24/2022]
Abstract
Turning, while walking, is an important component of adaptive locomotion. Current hypotheses regarding the motor control of body segment coordination during turning suggest heavy influence of visual information. The authors aimed to examine whether visual field impairment (central loss or peripheral loss) affects body segment coordination during walking turns in healthy young adults. No significant differences in the onset time of segments or intersegment coordination were observed because of visual field occlusion. These results suggest that healthy young adults can use visual information obtained from central and peripheral visual fields interchangeably, pointing to flexibility of visuomotor control in healthy young adults. Further study in populations with chronic visual impairment and those with turning difficulties is warranted.
Affiliation(s)
- Nicholas G Murray
- Interdisciplinary Health Sciences, College of Health Sciences, The University of Texas at El Paso
40
Vaina LM, Buonanno F, Rushton SK. Spared ability to perceive direction of locomotor heading and scene-relative object movement despite inability to perceive relative motion. Med Sci Monit 2014; 20:1563-71. [PMID: 25183375] [PMCID: PMC4161606] [DOI: 10.12659/msm.892199] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Indexed: 11/09/2022] Open
Abstract
BACKGROUND All contemporary models of perception of locomotor heading from optic flow (the characteristic patterns of retinal motion that result from self-movement) begin with relative motion. Therefore it would be expected that an impairment of relative-motion perception should impact the ability to judge heading and other 3D motion tasks. MATERIAL AND METHODS We report two patients with occipital lobe lesions whom we tested on a battery of motion tasks. Patients were impaired on all tests that involved relative motion in the plane (motion discontinuity, form from differences in motion direction or speed). Despite this, they retained the ability to judge their direction of heading relative to a target. A potential confound is that observers can derive information about heading from scale changes, bypassing the need to use optic flow. Therefore we ran further experiments in which we isolated optic flow and scale change. RESULTS Patients' performance was in the normal range on both tests. The finding that the ability to perceive heading can be retained despite an impairment in judging relative motion questions the assumption that heading perception proceeds from initial processing of relative motion. Furthermore, on a collision detection task, SS's and SR's performance was significantly better for simulated forward movement of the observer in the 3D scene than for the static observer. This suggests that, in spite of severe deficits on relative motion in the frontoparallel (xy) plane, information from self-motion helped identify objects moving along an intercepting 3D relative motion trajectory. CONCLUSIONS This result suggests a potential use of a flow-parsing strategy to detect, in a 3D world, the trajectory of moving objects when the observer is moving forward. These results have implications for developing rehabilitation strategies for deficits in visually guided navigation.
Affiliation(s)
- Lucia Maria Vaina
- Brain and Vision Research Laboratory, Boston University, Boston, USA
- Ferdinando Buonanno
- Department of Neurology, Harvard Medical School, Massachusetts General Hospital, Neurology of Vision Laboratory, Boston, USA
- Simon K Rushton
- School of Psychology, Cardiff University, Cardiff, United Kingdom
41
Campos JL, Butler JS, Bülthoff HH. Contributions of visual and proprioceptive information to travelled distance estimation during changing sensory congruencies. Exp Brain Res 2014; 232:3277-89. [PMID: 24961739] [DOI: 10.1007/s00221-014-4011-0] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.3] [Received: 11/22/2013] [Accepted: 05/31/2014] [Indexed: 10/25/2022]
Abstract
Recent research has provided evidence that visual and body-based cues (vestibular, proprioceptive and efference copy) are integrated using a weighted linear sum during walking and passive transport. However, little is known about the specific weighting of visual information when combined with proprioceptive inputs alone, in the absence of vestibular information about forward self-motion. Therefore, in this study, participants walked in place on a stationary treadmill while dynamic visual information was updated in real time via a head-mounted display. The task required participants to travel a predefined distance and subsequently match this distance by adjusting an egocentric, in-depth target using a game controller. Travelled distance information was provided either through visual cues alone, proprioceptive cues alone or both cues combined. In the combined cue condition, the relationship between the two cues was manipulated by either changing the visual gain across trials (0.7×, 1.0×, 1.4×; Exp. 1) or the proprioceptive gain across trials (0.7×, 1.0×, 1.4×; Exp. 2). Results demonstrated an overall higher weighting of proprioception over vision. These weights were scaled, however, as a function of which sensory input provided more stable information across trials. Specifically, when visual gain was constantly manipulated, proprioceptive weights were higher than when proprioceptive gain was constantly manipulated. These results therefore reveal interesting characteristics of cue-weighting within the context of unfolding spatio-temporal cue dynamics.
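The weighted linear sum referred to above can be sketched as inverse-variance (maximum-likelihood) cue weighting. The weighting rule is the standard optimal-combination model; the example numbers are purely illustrative assumptions.

```python
def fuse_cues(estimates, variances):
    """Weighted linear sum of cue estimates using inverse-variance
    (maximum-likelihood) weights; weights are normalized to sum to 1."""
    raw = [1.0 / v for v in variances]        # more reliable cue -> larger weight
    total = sum(raw)
    weights = [w / total for w in raw]
    fused = sum(w * e for w, e in zip(weights, estimates))
    return fused, weights

# Hypothetical example: the lower-variance (here, proprioceptive) estimate
# dominates the fused travelled-distance percept.
distance, weights = fuse_cues([10.0, 12.0], [1.0, 3.0])
```

With these made-up variances the first cue receives weight 0.75 and the fused estimate is 10.5, illustrating how a noisier cue is down-weighted rather than ignored.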
Affiliation(s)
- Jennifer L Campos
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Spemannstr. 38, 72076, Tübingen, Germany
42
Li L, Niehorster DC. Influence of optic flow on the control of heading and target egocentric direction during steering toward a goal. J Neurophysiol 2014; 112:766-77. [PMID: 25128559] [DOI: 10.1152/jn.00697.2013] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Indexed: 11/22/2022] Open
Abstract
Although previous studies have shown that people use both optic flow and target egocentric direction to walk or steer toward a goal, it remains unclear how enriching the optic flow field affects the control of heading specified by optic flow and the control of target egocentric direction during goal-oriented locomotion. In the current study, we used a control-theoretic approach to separate the control response specific to these two cues in the visual control of steering toward a goal. The results showed that the addition of optic flow information (such as foreground motion and global flow) in the display improved the overall control precision, the amplitude, and the response delay of the control of heading. The amplitude and the response delay of the control of target egocentric direction were, however, not affected. The improvement in the control of heading with enriched optic flow displays was mirrored by an increase in the accuracy of heading perception. The findings provide direct support for the claim that people use the heading specified by optic flow as well as target egocentric direction to walk or steer toward a goal and suggest that the visual system does not internally weigh these two cues for goal-oriented locomotion control.
Affiliation(s)
- Li Li
- Department of Psychology, The University of Hong Kong, Hong Kong, Special Administrative Region of the People's Republic of China
- Diederick C Niehorster
- Department of Psychology, The University of Hong Kong, Hong Kong, Special Administrative Region of the People's Republic of China
43
Saunders JA. Reliability and relative weighting of visual and nonvisual information for perceiving direction of self-motion during walking. J Vis 2014; 14:24. [PMID: 24648194] [DOI: 10.1167/14.3.24] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Indexed: 11/24/2022] Open
Abstract
Direction of self-motion during walking is indicated by multiple cues, including optic flow, nonvisual sensory cues, and motor prediction. I measured the reliability of perceived heading from visual and nonvisual cues during walking, and whether cues are weighted in an optimal manner. I used a heading alignment task to measure perceived heading during walking. Observers walked toward a target in a virtual environment with and without global optic flow. The target was simulated to be infinitely far away, so that it did not provide direct feedback about direction of self-motion. Variability in heading direction was low even without optic flow, with average RMS error of 2.4°. Global optic flow reduced variability to 1.9°-2.1°, depending on the structure of the environment. The small amount of variance reduction was consistent with optimal use of visual information. The relative contribution of visual and nonvisual information was also measured using cue conflict conditions. Optic flow specified a conflicting heading direction (±5°), and bias in walking direction was used to infer relative weighting. Visual feedback influenced heading direction by 16%-34% depending on scene structure, with more effect with dense motion parallax. The weighting of visual feedback was close to the predictions of an optimal integration model given the observed variability measures.
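The "optimal use of visual information" claim above can be checked with a short calculation using the standard inverse-variance combination rule. Treating the reported RMS errors as cue standard deviations and assuming independent cues is a simplification, not something the abstract states.

```python
import math

def combined_sd(sd_a, sd_b):
    """Standard deviation predicted for optimal (inverse-variance)
    combination of two independent cues."""
    return math.sqrt(1.0 / (1.0 / sd_a ** 2 + 1.0 / sd_b ** 2))

def implied_other_sd(sd_combined, sd_known):
    """SD the second cue must have for optimal combination with a known
    cue to produce the observed combined SD."""
    return math.sqrt(1.0 / (1.0 / sd_combined ** 2 - 1.0 / sd_known ** 2))

# Using the reported numbers: nonvisual-only RMS error of 2.4 deg and a
# combined error of 1.9 deg imply a visual-only reliability of about 3.1
# deg, i.e. the small variance reduction is what a fairly noisy visual
# cue would predict under optimal combination.
visual_sd = implied_other_sd(1.9, 2.4)
```

The same functions reproduce textbook cases exactly: cues with SDs of 3 and 4 combine optimally to an SD of 2.4.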
Affiliation(s)
- Jeffrey A Saunders
- Department of Psychology, University of Hong Kong, Hong Kong, Hong Kong SAR
44
A unified model of heading and path perception in primate MSTd. PLoS Comput Biol 2014; 10:e1003476. [PMID: 24586130] [PMCID: PMC3930491] [DOI: 10.1371/journal.pcbi.1003476] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Received: 07/23/2013] [Accepted: 01/03/2014] [Indexed: 11/20/2022] Open
Abstract
Self-motion, steering, and obstacle avoidance during navigation in the real world require humans to travel along curved paths. Many perceptual models have been proposed that focus on heading, which specifies the direction of travel along straight paths, but not on path curvature, which humans accurately perceive and is critical to everyday locomotion. In primates, including humans, dorsal medial superior temporal area (MSTd) has been implicated in heading perception. However, the majority of MSTd neurons respond optimally to spiral patterns, rather than to the radial expansion patterns associated with heading. No existing theory of curved path perception explains the neural mechanisms by which humans accurately assess path, and no functional role for spiral-tuned cells has yet been proposed. Here we present a computational model that demonstrates how the continuum of observed cells (radial to circular) in MSTd can simultaneously code curvature and heading across the neural population. Curvature is encoded through the spirality of the most active cell, and heading is encoded through the visuotopic location of the center of the most active cell's receptive field. Model curvature and heading errors fit those made by humans. Our model challenges the view that the function of MSTd is heading estimation; based on our analysis, we claim that it is primarily concerned with trajectory estimation and the simultaneous representation of both curvature and heading. In our model, temporal dynamics afford time-history in the neural representation of optic flow, which may modulate its structure. This has far-reaching implications for the interpretation of studies that assume that optic flow is, and should be, represented as an instantaneous vector field. Our results suggest that spiral motion patterns that emerge in spatio-temporal optic flow are essential for guiding self-motion along complex trajectories, and that cells in MSTd are specifically tuned to extract complex trajectory estimation from flow.
45
Blind(fold)ed by science: a constant target-heading angle is used in visual and nonvisual pursuit. Psychon Bull Rev 2013; 20:923-34. [PMID: 23440726] [DOI: 10.3758/s13423-013-0412-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Indexed: 11/08/2022]
Abstract
Previous work investigating the strategies that observers use to intercept moving targets has shown that observers maintain a constant target-heading angle (CTHA) to achieve interception. Most of this work has concluded or indirectly assumed that vision is necessary to do this. We investigated whether blindfolded pursuers chasing a ball carrier holding a beeping football would utilize the same strategy that sighted observers use to chase a ball carrier. Results confirm that both blindfolded and sighted pursuers use a CTHA strategy in order to intercept targets, whether jogging or walking and irrespective of football experience and path and speed deviations of the ball carrier during the course of the pursuit. This work shows that the mechanisms involved in intercepting moving targets may be designed to use different sensory mechanisms in order to drive behavior that leads to the same end result. This has potential implications for the supramodal representation of motion perception in the human brain.
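A minimal sketch of a CTHA controller, assuming the pursuer commands a turn rate that nulls any deviation of the target-heading angle from a fixed reference; the gain and the point-mass geometry are hypothetical, not taken from the study.

```python
import math

def ctha_turn_rate(pursuer_xy, heading, target_xy, ref_angle, gain=2.0):
    """Turn-rate command that keeps the target-heading angle (direction to
    the target relative to the pursuer's current heading) at a constant
    reference value.  Angles are in radians; gain is an arbitrary choice."""
    dx = target_xy[0] - pursuer_xy[0]
    dy = target_xy[1] - pursuer_xy[1]
    bearing = math.atan2(dy, dx)                    # allocentric direction to target
    # target-heading angle, wrapped into (-pi, pi]
    tha = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
    return gain * (tha - ref_angle)                 # null the deviation
```

A pursuer already holding the reference angle receives a zero command; any drift of the target-heading angle, whether sensed visually or by sound, produces a corrective turn, which is consistent with the supramodal point the abstract makes.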
46
The time course of estimating time-to-contact: switching between sources of information. Vision Res 2013; 92:53-8. [PMID: 24075899] [DOI: 10.1016/j.visres.2013.09.007] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Received: 02/13/2013] [Revised: 09/09/2013] [Accepted: 09/16/2013] [Indexed: 11/20/2022]
Abstract
The different sources of information that can be used to estimate time-to-contact may have different degrees of reliability across time. For example, after a given presentation or display time, an absolute change of angular size can be more reliable than the corresponding estimation of the rate of angular expansion (e.g. motion information). One could then expect systematic biases in the observer's responses for different times of stimulus exposure. In one experiment, observers judged whether approaching objects arrived at the point of observation before or after a reference beep (1.2s) under monocular, and binocular plus monocular vision. Five display times from 0.1 to 0.9s were used. Unlike monocular viewing, where accuracy increased monotonically with display time, an interesting non-linearity occurred for objects with small size when binocular information was available. Accuracy reached maximum values for small objects with only 0.3s of vision with stereopsis. This accuracy, however, dropped significantly after 0.4s of exposure and increased again linearly with time. This is consistent with subjects switching from using binocular information to using monocular motion information when it started to become more reliable. We also explored whether monocular cues were combined differently across time by fitting a model that relates visual angle to its rate of expansion. Results show that subjects relied more on angular motion information (i.e. rate of expansion) with presentation time but interrupting this motion integration process led to a loss of accuracy in time-to-contact judgments.
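The monocular relation between angular size and expansion rate that underlies these judgments is the classic tau variable. A small-angle sketch (the numeric example is made up for illustration):

```python
def time_to_contact(theta, theta_dot):
    """First-order time-to-contact estimate, tau = theta / theta_dot,
    from angular size theta (rad) and expansion rate theta_dot (rad/s).
    Valid under a small-angle approximation for an approaching object."""
    if theta_dot <= 0:
        raise ValueError("object must be expanding (approaching)")
    return theta / theta_dot

# Small-angle check: a 0.5 m object at 10 m closing at 5 m/s subtends
# roughly 0.05 rad, expands at roughly 0.025 rad/s, and arrives in 2 s.
tau = time_to_contact(0.05, 0.025)
```

Because theta_dot is noisy at short display times, an estimate built on this ratio only becomes reliable once enough motion has been integrated, which is the switching point the abstract describes.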
47
Fajen BR, Parade MS, Matthis JS. Humans perceive object motion in world coordinates during obstacle avoidance. J Vis 2013; 13:25. [PMID: 23887048] [PMCID: PMC3726133] [DOI: 10.1167/13.8.25] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.8] [Indexed: 11/24/2022] Open
Abstract
A fundamental question about locomotion in the presence of moving objects is whether movements are guided based upon perceived object motion in an observer-centered or world-centered reference frame. The former captures object motion relative to the moving observer and depends on both observer and object motion. The latter captures object motion relative to the stationary environment and is independent of observer motion. Subjects walked through a virtual environment (VE) viewed through a head-mounted display and indicated whether they would pass in front of or behind a moving obstacle that was on course to cross their future path. Subjects' movement through the VE was manipulated such that object motion in observer coordinates was affected while object motion in world coordinates was the same. We found that when moving observers choose routes around moving obstacles, they rely on object motion perceived in world coordinates. This entails a process, which has been called flow parsing (Rushton & Warren, 2005; Warren & Rushton, 2009a), that recovers the component of optic flow due to object motion independent of self-motion. We found that when self-motion is real and actively generated, the process by which object motion is recovered relies on both visual and nonvisual information to factor out the influence of self-motion. The remaining component contains information about object motion in world coordinates that is needed to guide locomotion.
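The flow-parsing step described here amounts to subtracting the optic-flow component attributable to self-motion from the retinal flow. A toy vector sketch, with made-up flow fields rather than anything from the experiment:

```python
def parse_object_flow(retinal_flow, self_motion_flow):
    """Flow-parsing sketch: subtract the optic-flow component due to
    self-motion from the retinal flow, leaving object motion in world
    coordinates.  Each flow is a list of (vx, vy) vectors."""
    return [(rx - sx, ry - sy)
            for (rx, ry), (sx, sy) in zip(retinal_flow, self_motion_flow)]

# A point whose retinal motion is fully explained by self-motion has zero
# residual (stationary in the world); any residual is object motion.
residual = parse_object_flow([(2.0, 1.0), (1.0, 0.0)],
                             [(2.0, 0.0), (1.0, 0.0)])
```

In the study, the self-motion component is estimated from both visual and nonvisual information, so the subtraction here stands in for a richer multisensory estimate.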
Affiliation(s)
- Brett R Fajen
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, USA.
48
Aburub AS, Lamontagne A. Altered steering strategies for goal-directed locomotion in stroke. J Neuroeng Rehabil 2013; 10:80. [PMID: 23875969] [PMCID: PMC3733933] [DOI: 10.1186/1743-0003-10-80] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Received: 08/20/2012] [Accepted: 06/14/2013] [Indexed: 11/23/2022] Open
Abstract
Background: Individuals who have sustained a stroke can manifest altered locomotor steering behaviors when exposed to optic flows expanding from different locations. Whether these alterations persist in the presence of a visible goal, and whether they can be explained by a perceptuo-motor disorder, remains unknown. The purpose of this study was to compare stroke participants and healthy participants on their ability to control heading while exposed to changing optic flows and target locations.
Methods: Ten participants with stroke (55.6 ± 9.3 yrs) and ten healthy controls (57.0 ± 11.5 yrs) participated in a mouse-driven steering task (perceptuo-motor task) while seated and in a walking steering task. In the seated steering task, participants were instructed to head or ‘walk’ toward a target in the virtual environment by using a mouse while wearing a helmet-mounted display (HMD). In the walking task, participants performed a similar steering task in the same virtual environment while walking overground at their comfortable speed. For both experiments, the target and/or the focus of expansion (FOE) of the optic flow shifted to the side (±20°) or remained centered. The main outcome measure was net heading error (NHE). Secondary outcomes included mediolateral displacement, horizontal head orientation, and onsets of heading and head reorientation.
Results: In the walking steering task, the presence of FOE shifts modulated the extent and timing of mediolateral displacement and head rotation changes, as well as NHE magnitudes. Participants overshot and undershot their net heading in response to ipsilateral and contralateral FOE and target shifts, respectively. Stroke participants made larger NHEs, especially when the FOE was shifted towards the non-paretic side. In the seated steering task, similar NHEs were observed between stroke and healthy participants.
Conclusions: The findings highlight the fine coordination between rotational and translational steering mechanisms in the presence of targets and FOE shifts. The altered performance of stroke participants in the walking but not the seated steering task suggests that altered perceptuo-motor processing of optic flow is not a main contributing factor and that other stroke-related sensorimotor deficits are involved.
|
49
|
Foulkes AJ, Rushton SK, Warren PA. Heading recovery from optic flow: comparing performance of humans and computational models. Front Behav Neurosci 2013; 7:53. [PMID: 23801946 PMCID: PMC3689323 DOI: 10.3389/fnbeh.2013.00053] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2012] [Accepted: 05/07/2013] [Indexed: 11/13/2022] Open
Abstract
Human observers can perceive their direction of heading with a precision of about a degree. Several computational models of the processes underpinning the perception of heading have been proposed. In the present study, we set out to assess which of four candidate models best captured human performance; the four models were selected to reflect key differences in approach and method for modelling optic flow processing to recover movement parameters. We first generated a performance profile for human observers by measuring how performance changed as we systematically manipulated both the quantity (number of dots in the stimulus per frame) and quality (amount of 2D directional noise) of the flow field information. We then generated comparable performance profiles for the four candidate models. Models varied markedly in both their performance and their similarity to human data. To formally assess the match between the models and human performance, we regressed the output of each of the four models against the human performance data. We were able to rule out two models that produced performance profiles very different from those of human observers. The remaining two shared some similarities with the human performance profiles in terms of the magnitude and pattern of thresholds. However, none of the models tested could capture all aspects of the human data.
Affiliation(s)
- Andrew J. Foulkes, School of Psychological Sciences, The University of Manchester, Manchester, UK
- Paul A. Warren, School of Psychological Sciences, The University of Manchester, Manchester, UK
|
50
|
Cirio G, Olivier AH, Marchal M, Pettré J. Kinematic evaluation of virtual walking trajectories. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2013; 19:671-680. [PMID: 23428452 DOI: 10.1109/tvcg.2013.34] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
Virtual walking, a fundamental task in Virtual Reality (VR), is greatly influenced by the locomotion interface being used, by the specificities of the input and output devices, and by the way the virtual environment is represented. No matter how virtual walking is controlled, the generation of realistic virtual trajectories is absolutely required for some applications, especially those dedicated to the study of walking behaviors in VR and to navigation through virtual places for architecture, rehabilitation, and training. Previous studies evaluating the realism of locomotion trajectories have mostly considered the result of the locomotion task (efficiency, accuracy) and its subjective perception (presence, cybersickness). Few have focused on the locomotion trajectory itself, and then only in geometrically constrained tasks. In this paper, we study the realism of unconstrained trajectories produced during virtual walking by addressing the following question: did the user reach the destination by virtually walking along a trajectory they would have followed in similar real conditions? To this end, we propose a comprehensive evaluation framework consisting of a set of trajectographical criteria and a locomotion model that generates reference trajectories. We consider a simple locomotion task where users walk between two oriented points in space. The travel path is analyzed both geometrically and temporally in comparison to the simulated reference trajectories. In addition, we demonstrate the framework through a user study covering an initial set of common and frequent virtual walking conditions, namely different input devices, output display devices, control laws, and visualization modalities. The study provides insight into the relative contribution of each condition to the overall realism of the resulting virtual trajectories.
|