1. Egger SW, Keemink SW, Goldman MS, Britten KH. Context-dependence of deterministic and nondeterministic contributions to closed-loop steering control. bioRxiv 2024:2024.07.26.605325. [PMID: 39131368] [PMCID: PMC11312469] [DOI: 10.1101/2024.07.26.605325]
Abstract
In natural circumstances, sensory systems operate in a closed loop with motor output, whereby actions shape subsequent sensory experiences. A prime example of this is the sensorimotor processing required to align one's direction of travel, or heading, with one's goal, a behavior we refer to as steering. In steering, motor outputs work to eliminate errors between the direction of heading and the goal, modifying subsequent errors in the process. The closed-loop nature of the behavior makes it challenging to determine how deterministic and nondeterministic processes contribute to behavior. We overcome this by applying a nonparametric, linear kernel-based analysis to behavioral data of monkeys steering through a virtual environment in two experimental contexts. In a given context, the results were consistent with previous work that described the transformation as a second-order linear system. Classically, the parameters of such second-order models are associated with physical properties of the limb such as viscosity and stiffness that are commonly assumed to be approximately constant. By contrast, we found that the fit kernels differed strongly across tasks in these and other parameters, suggesting context-dependent changes in neural and biomechanical processes. We additionally fit residuals to a simple noise model and found that the form of the noise was highly conserved across both contexts and animals. Strikingly, the fitted noise also closely matched that found previously in a human steering task. Altogether, this work presents a kernel-based analysis that characterizes the context-dependence of deterministic and non-deterministic components of a closed-loop sensorimotor task.
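The nonparametric kernel analysis described above can be illustrated with a minimal sketch (synthetic data and hypothetical variable names, not the authors' pipeline): a causal finite impulse-response kernel from an input signal (e.g., heading error) to an output (e.g., steering) is estimated by ordinary least squares.

```python
import numpy as np

def fit_linear_kernel(x, y, n_taps):
    """Estimate a causal FIR kernel k (length n_taps) such that
    y[t] ~= sum_j k[j] * x[t - j], via ordinary least squares."""
    T = len(x)
    # Design matrix: column j holds the input delayed by j samples.
    X = np.zeros((T, n_taps))
    for j in range(n_taps):
        X[j:, j] = x[:T - j]
    k, *_ = np.linalg.lstsq(X, y, rcond=None)
    return k

# Synthetic demo: a damped oscillatory kernel (second-order-like) driving
# a noisy output, mimicking deterministic + nondeterministic components.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)                       # e.g., heading-error input
true_k = np.exp(-np.arange(20) / 5.0) * np.sin(np.arange(20) / 3.0)
y = np.convolve(x, true_k)[:len(x)] + 0.05 * rng.standard_normal(len(x))
k_hat = fit_linear_kernel(x, y, n_taps=20)
```

On data like this the least-squares estimate `k_hat` recovers `true_k` up to noise; the residuals `y - X @ k_hat` would then be fit by a separate noise model, analogous to the abstract's treatment of the nondeterministic component.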
Affiliation(s)
- Seth W Egger
- Center for Neuroscience, University of California, Davis
- Sander W Keemink
- Department of Neurobiology, Physiology and Behavior, University of California, Davis
- Mark S Goldman
- Center for Neuroscience, University of California, Davis
- Department of Neurobiology, Physiology and Behavior, University of California, Davis
- Department of Ophthalmology and Vision Science, University of California, Davis
- Kenneth H Britten
- Center for Neuroscience, University of California, Davis
- Department of Neurobiology, Physiology and Behavior, University of California, Davis
2. Wann JP. Processing of complex traffic scenes for effective steering and collision avoidance: a perspective, from research into human control, on the challenges for sensor-based autonomous vehicles on urban roads. Front Psychol 2024; 15:1347309. [PMID: 38505365] [PMCID: PMC10948443] [DOI: 10.3389/fpsyg.2024.1347309]
Abstract
An overview is provided of behavioral research into human steering and collision avoidance including the processing of optic flow, optical looming and the role of the human mobile gaze system. A consideration is then made of the issues that may occur for autonomous vehicles (AV) when they move from grid-type road networks into complex inner-city streets and interact with human drivers, pedestrians and cyclists. Comparisons between human processing and AV processing of these interactions are made. This raises issues as to whether AV control systems need to mimic human visual processing more closely and highlights the need for AV systems to develop a "theory of road users" that allows attribution of intent to other drivers, cyclists or pedestrians. Guidelines for the development of a "theory of road users" for AVs are suggested.
Affiliation(s)
- John P. Wann
- Royal Holloway, University of London, Egham, United Kingdom
3. The Effects of Depth Cues and Vestibular Translation Signals on the Rotation Tolerance of Heading Tuning in Macaque Area MSTd. eNeuro 2020; 7:ENEURO.0259-20.2020. [PMID: 33127626] [PMCID: PMC7688306] [DOI: 10.1523/eneuro.0259-20.2020]
Abstract
When the eyes rotate during translational self-motion, the focus of expansion (FOE) in optic flow no longer indicates heading, yet heading judgements are largely unbiased. Much emphasis has been placed on the role of extraretinal signals in compensating for the visual consequences of eye rotation. However, recent studies also support a purely visual mechanism of rotation compensation in heading-selective neurons. Computational theories support a visual compensatory strategy but require different visual depth cues. We examined the rotation tolerance of heading tuning in macaque area MSTd using two different virtual environments, a frontoparallel (2D) wall and a 3D cloud of random dots. Both environments contained rotational optic flow cues (i.e., dynamic perspective), but only the 3D cloud stimulus contained local motion parallax cues, which are required by some models. The 3D cloud environment did not enhance the rotation tolerance of heading tuning for individual MSTd neurons, nor the accuracy of heading estimates decoded from population activity, suggesting a key role for dynamic perspective cues. We also added vestibular translation signals to optic flow, to test whether rotation tolerance is enhanced by non-visual cues to heading. We found no benefit of vestibular signals overall, but a modest effect for some neurons with significant vestibular heading tuning. We also find that neurons with more rotation tolerant heading tuning typically are less selective to pure visual rotation cues. Together, our findings help to clarify the types of information that are used to construct heading representations that are tolerant to eye rotations.
4. Macuga KL, Beall AC, Smith RS, Loomis JM. Visual control of steering in curve driving. J Vis 2020; 19:1. [PMID: 31042254] [DOI: 10.1167/19.5.1]
Abstract
This pair of studies investigated steering in the absence of continuous visual information. In a driving simulator, participants steered a curving path that was displayed either continuously or intermittently. Optic flow conditions were manipulated to alter the nature of the heading information with respect to the path being steered. Removing or biasing heading information had little effect on steering even during long and frequent path occlusions as long as turn rate was available. This demonstrates that participants can use intermittent views of the path to plan their steering actions and optic flow to accurately update vehicle turns with respect to that path.
Affiliation(s)
- Kristen L Macuga
- School of Psychological Science, Oregon State University, Corvallis, OR, USA
- Andrew C Beall
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA, USA
- Roy S Smith
- Department of Information Technology and Electrical Engineering, Swiss Federal Institute of Technology, Zürich, Switzerland
- Jack M Loomis
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA, USA
5. Zhao H, Straub D, Rothkopf CA. The visual control of interceptive steering: How do people steer a car to intercept a moving target? J Vis 2019; 19:11. [PMID: 31830240] [DOI: 10.1167/19.14.11]
Abstract
The visually guided interception of a moving target is a fundamental visuomotor task that humans can do with ease. But how humans carry out this task is still unclear despite numerous empirical investigations. Measurements of angular variables during human interception have suggested three possible strategies: the pursuit strategy, the constant bearing angle strategy, and the constant target-heading strategy. Here, we review previous experimental paradigms and show that some of them do not allow one to distinguish among the three strategies. Based on this analysis, we devised a virtual driving task that allows investigating which of the three strategies best describes human interception. Crucially, we measured participants' steering, head, and gaze directions over time for three different target velocities. Subjects initially aligned head and gaze in the direction of the car's heading. When the target appeared, subjects centered their gaze on the target, pointed their head slightly off the heading direction toward the target, and maintained an approximately constant target-heading angle, whose magnitude varied across participants, while the target's bearing angle continuously changed. With a second condition, in which the target was partially occluded, we investigated several alternative hypotheses about participants' visual strategies. Overall, the results suggest that interceptive steering is best described by the constant target-heading strategy and that gaze and head are coordinated to continuously acquire visual information to achieve successful interception.
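The three candidate strategies compared above differ in which angle the controller holds constant. A hypothetical discrete-time sketch of the constant target-heading strategy (the proportional control law and variable names are illustrative, not the paper's model):

```python
import math

def constant_target_heading_step(x, y, heading_deg, tx, ty, offset_deg, gain=0.5):
    """One steering update that drives the target-heading angle (angle from the
    current heading to the target direction) toward a constant offset."""
    bearing = math.degrees(math.atan2(ty - y, tx - x))  # target bearing, world frame
    # Signed target-heading angle, wrapped to [-180, 180).
    target_heading = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    return heading_deg + gain * (target_heading - offset_deg)

# If the target lies 30 deg left of the current heading and the driver maintains
# a 10 deg offset, the controller turns partway toward that offset.
new_heading = constant_target_heading_step(
    x=0.0, y=0.0, heading_deg=0.0,
    tx=math.cos(math.radians(30.0)), ty=math.sin(math.radians(30.0)),
    offset_deg=10.0)
```

By contrast, the constant bearing angle strategy would null changes in `bearing` itself, irrespective of the vehicle's heading, which is one reason some paradigms cannot tell the strategies apart.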
Affiliation(s)
- Huaiyong Zhao
- Institute of Psychology, Technical University Darmstadt, Darmstadt, Germany
- Dominik Straub
- Institute of Psychology, Technical University Darmstadt, Darmstadt, Germany
- Constantin A Rothkopf
- Institute of Psychology, Technical University Darmstadt, Darmstadt, Germany; Center for Cognitive Science, Technical University Darmstadt, Germany; Frankfurt Institute for Advanced Studies, Goethe University, Germany
6. Mole CD, Lappi O, Giles O, Markkula G, Mars F, Wilkie RM. Getting Back Into the Loop: The Perceptual-Motor Determinants of Successful Transitions out of Automated Driving. Hum Factors 2019; 61:1037-1065. [PMID: 30840514] [DOI: 10.1177/0018720819829594]
Abstract
OBJECTIVE: To present a structured, narrative review highlighting research into human perceptual-motor coordination that can be applied to automated vehicle (AV)-human transitions. BACKGROUND: Manual control of vehicles is made possible by the coordination of perceptual-motor behaviors (gaze and steering actions), where active feedback loops enable drivers to respond rapidly to ever-changing environments. AVs will change the nature of driving to periods of monitoring followed by the human driver taking over manual control. The impact of this change is currently poorly understood. METHOD: We outline an explanatory framework for understanding control transitions based on models of human steering control. This framework can be summarized as a perceptual-motor loop that requires (a) calibration and (b) gaze and steering coordination. A review of the current experimental literature on transitions is presented in the light of this framework. RESULTS: The success of transitions is often measured using reaction times; however, the perceptual-motor mechanisms underpinning steering quality remain relatively unexplored. CONCLUSION: Modeling the coordination of gaze and steering and the calibration of perceptual-motor control will be crucial to ensure safe and successful transitions out of automated driving. APPLICATION: This conclusion poses a challenge for future research on AV-human transitions. Future studies need to provide an understanding of human behavior that will be sufficient to capture the essential characteristics of drivers reengaging control of their vehicle. The proposed framework can provide a guide for investigating specific components of human control of steering and potential routes to improving manual control recovery.
Affiliation(s)
- Otto Lappi
- Cognitive Science, University of Helsinki, Finland
7. Retinal Stabilization Reveals Limited Influence of Extraretinal Signals on Heading Tuning in the Medial Superior Temporal Area. J Neurosci 2019; 39:8064-8078. [PMID: 31488610] [DOI: 10.1523/jneurosci.0388-19.2019]
Abstract
Heading perception in primates depends heavily on visual optic-flow cues. Yet during self-motion, heading percepts remain stable, even though smooth-pursuit eye movements often distort optic flow. According to theoretical work, self-motion can be represented accurately by compensating for these distortions in two ways: via retinal mechanisms or via extraretinal efference-copy signals, which predict the sensory consequences of movement. Psychophysical evidence strongly supports the efference-copy hypothesis, but physiological evidence remains inconclusive. Neurons that signal the true heading direction during pursuit are found in visual areas of monkey cortex, including the dorsal medial superior temporal area (MSTd). Here we measured heading tuning in MSTd using a novel stimulus paradigm, in which we stabilize the optic-flow stimulus on the retina during pursuit. This approach isolates the effects on neuronal heading preferences of extraretinal signals, which remain active while the retinal stimulus is prevented from changing. Our results from 3 female monkeys demonstrate a significant but small influence of extraretinal signals on the preferred heading directions of MSTd neurons. Under our stimulus conditions, which are rich in retinal cues, we find that retinal mechanisms dominate physiological corrections for pursuit eye movements, suggesting that extraretinal cues, such as predictive efference-copy mechanisms, have a limited role under naturalistic conditions.

SIGNIFICANCE STATEMENT: Sensory systems discount stimulation caused by an animal's own behavior. For example, eye movements cause irrelevant retinal signals that could interfere with motion perception. The visual system compensates for such self-generated motion, but how this happens is unclear. Two theoretical possibilities are a purely visual calculation or one using an internal signal of eye movements to compensate for their effects. The latter can be isolated by experimentally stabilizing the image on a moving retina, but this approach has never been adopted to study motion physiology. Using this method, we find that extraretinal signals have little influence on activity in visual cortex, whereas visually based corrections for ongoing eye movements have stronger effects and are likely most important under real-world conditions.
8. Mole CD, Jersakova R, Kountouriotis GK, Moulin CJ, Wilkie RM. Metacognitive judgements of perceptual-motor steering performance. Q J Exp Psychol (Hove) 2018; 71:2223-2234. [PMID: 30226435] [DOI: 10.1177/1747021817737496]
Abstract
Control of skilled actions requires rapid information sampling and processing, which may largely be carried out subconsciously. However, individuals often need to make conscious strategic decisions that ideally would be based upon accurate knowledge of performance. Here, we determined the extent to which individuals have explicit awareness of their steering performance (conceptualised as "metacognition"). Participants steered in a virtual environment along a bending road while attempting to keep within a central demarcated target zone. Task demands were altered by manipulating locomotor speed (fast/slow) and the target zone (narrow/wide). All participants received continuous visual feedback about position in zone, and one sub-group was given additional auditory warnings when exiting/entering the zone. At the end of each trial, participants made a metacognitive evaluation: the proportion of the trial they believed was spent in the zone. Overall, although evaluations broadly shifted in line with task demands, participants showed limited calibration to performance. Regression analysis showed that evaluations were influenced by two components: (a) direct monitoring of performance and (b) indirect task heuristics estimating performance based on salient cues (e.g., speed). Evaluations often weighted indirect task heuristics inappropriately, but the additional auditory feedback improved evaluations seemingly by reducing this weighting. These results have important implications for all motor tasks where conscious cognitive control can be used to influence action selection.
Affiliation(s)
- Callum D Mole
- School of Psychology, University of Leeds, Leeds, UK
- Chris JA Moulin
- Laboratoire de Psychologie et Neurocognition (CNRS 5105), Université Grenoble Alpes, Grenoble, France
9. Okafuji Y, Mole CD, Merat N, Fukao T, Yokokohji Y, Inou H, Wilkie RM. Steering bends and changing lanes: The impact of optic flow and road edges on two point steering control. J Vis 2018; 18:14. [PMID: 30242386] [DOI: 10.1167/18.9.14]
Abstract
Successful driving involves steering corrections that respond to immediate positional errors while also anticipating upcoming changes to the road layout ahead. In popular steering models these tasks are often treated as separate functions using two points: the near region for correcting current errors, and the far region for anticipating future steering requirements. Whereas two-point control models can capture many aspects of driver behavior, the nature of perceptual inputs to these two "points" remains unclear. Inspired by experiments that solely focused on road-edge information (Land & Horwood, 1995), two-point models have tended to ignore the role of optic flow during steering control. There is recent evidence demonstrating that optic flow should be considered within two-point control steering models (Mole, Kountouriotis, Billington, & Wilkie, 2016). To examine the impact of optic flow and road edges on two-point steering control we used a driving simulator to selectively and systematically manipulate these components. We removed flow and/or road-edge information from near or far regions of the scene, and examined how behaviors changed when steering along roads where the utility of far-road information varied. While steering behaviors were strongly influenced by the road-edges, there were also clear contributions of optic flow to steering responses. The patterns of steering were not consistent with optic flow simply feeding into two-point control; rather, the global optic flow field appeared to support effective steering responses across the time-course of each trajectory.
Affiliation(s)
- Yuki Okafuji
- School of Psychology, University of Leeds, Leeds, UK; Institute for Transport Studies, University of Leeds, Leeds, UK; Department of Electrical and Electronic Engineering, Ritsumeikan University, Kusatsu-shi, Japan; Department of Mechanical Engineering, Kobe University, Kobe-shi, Japan
- Natasha Merat
- Institute for Transport Studies, University of Leeds, Leeds, UK
- Takanori Fukao
- Department of Electrical and Electronic Engineering, Ritsumeikan University, Kusatsu-shi, Japan
- Hiroshi Inou
- DENSO International America, Inc., Southfield, MI, USA
10. When flow is not enough: evidence from a lane changing task. Psychol Res 2018; 84:834-849. [PMID: 30088078] [DOI: 10.1007/s00426-018-1070-z]
Abstract
Humans are able to estimate their heading on the basis of optic flow information and it has been argued that we use flow in this way to guide navigation. Consistent with this idea, several studies have reported good navigation performance in flow fields. However, one criticism of these studies is that they have generally focused on the task of walking or steering towards a target, offering an additional, salient directional cue. Hence, it remains a matter of debate as to whether humans are truly able to control steering in the presence of optic flow alone. In this study, we report a set of maneuvers carried out in flow fields in the absence of a physical target. To do this, we studied the everyday task of lane changing, a commonplace multiphase steering maneuver which can be conceptualized without the need for a target. What is more (and here is the crucial quirk), previous literature has found that in the absence of visual feedback, drivers show a systematic, asymmetric steering response, resulting in a systematic final heading error. If optic flow is sufficient for controlling navigation through our environment, we would expect this asymmetry to disappear whenever optic flow is provided. However, our results show that this asymmetry persisted, even in the presence of a flow field, implying that drivers are unable to use flow to guide normal steering responses in this task.
11. Eye position affects flight altitude in visual approach to landing independent of level of expertise of pilot. PLoS One 2018; 13:e0197585. [PMID: 29795618] [PMCID: PMC5967751] [DOI: 10.1371/journal.pone.0197585]
Abstract
The present study addresses the effect of the eye position in the cockpit on flight altitude during the final approach to landing. Three groups of participants with different levels of expertise (novices, trainees, and certified pilots) were given a laptop with a flight simulator and were asked to maintain a 3.71° glide slope while landing. Each participant performed 40 approaches to the runway. During 8 of the approaches, the point of view that the flight simulator used to compute the visual scene was slowly raised or lowered by 4 cm with respect to the cockpit, hence moving the projection of the visible part of the cockpit down or up in the visible scene in a hardly noticeable manner. The increases and decreases in the simulated eye height led to increases and decreases in the altitude of the approach trajectories for all three groups of participants. On the basis of these results, it is argued that the eye position of pilots during visual approaches is a factor that contributes to the risk of black hole accidents.
12. Kountouriotis GK, Mole CD, Merat N, Wilkie RM. The need for speed: global optic flow speed influences steering. R Soc Open Sci 2016; 3:160096. [PMID: 27293789] [PMCID: PMC4892451] [DOI: 10.1098/rsos.160096]
Abstract
How do animals follow demarcated paths? Different species are sensitive to optic flow and one control solution is to maintain the balance of flow symmetry across visual fields; however, it is unclear whether animals are sensitive to changes in asymmetries when steering along curved paths. Flow asymmetries can alter the global properties of flow (i.e. flow speed) which may also influence steering control. We tested humans steering curved paths in a virtual environment. The scene was manipulated so that the ground plane to either side of the demarcated path produced larger or smaller asymmetries in optic flow. Independent of asymmetries and the locomotor speed, the scene properties were altered to produce either faster or slower globally averaged flow speeds. Results showed that rather than being influenced by changes in flow asymmetry, steering responded to global flow speed. We conclude that the human brain performs global averaging of flow speed from across the scene and uses this signal as an input for steering control. This finding is surprising since the demarcated path provided sufficient information to steer, whereas global flow speed (by itself) did not. To explain these findings, existing models of steering must be modified to include a new perceptual variable: namely global optic flow speed.
Affiliation(s)
- Callum D. Mole
- School of Psychology, University of Leeds, Leeds LS2 9JT, UK
- Natasha Merat
- Institute for Transport Studies, University of Leeds, Leeds LS2 9JT, UK
13. Ramkhalawansingh R, Keshavarz B, Haycock B, Shahab S, Campos JL. Age Differences in Visual-Auditory Self-Motion Perception during a Simulated Driving Task. Front Psychol 2016; 7:595. [PMID: 27199829] [PMCID: PMC4848465] [DOI: 10.3389/fpsyg.2016.00595]
Abstract
Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion.
Affiliation(s)
- Robert Ramkhalawansingh
- Research/iDAPT, Toronto Rehabilitation Institute, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada
- Behrang Keshavarz
- Research/iDAPT, Toronto Rehabilitation Institute, Toronto, ON, Canada
- Bruce Haycock
- Research/iDAPT, Toronto Rehabilitation Institute, Toronto, ON, Canada
- Saba Shahab
- Research/iDAPT, Toronto Rehabilitation Institute, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada; Institute of Medical Science, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Jennifer L Campos
- Research/iDAPT, Toronto Rehabilitation Institute, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada
14. Crane BT. Coordinates of Human Visual and Inertial Heading Perception. PLoS One 2015; 10:e0135539. [PMID: 26267865] [PMCID: PMC4534459] [DOI: 10.1371/journal.pone.0135539]
Abstract
Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile these stimuli are sensed relative to different reference frames and it remains unclear if a perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested the reference frames remain separate even at higher levels of processing but has not addressed the resulting perception. Seven human subjects experienced a 2s, 16 cm/s translation and/or a visual stimulus corresponding with this translation. For each condition 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position were examined. The observations were fit using a two degree-of-freedom population vector decoder (PVD) model which considered the relative sensitivity to lateral motion and coordinate system offset. For visual stimuli gaze shifts caused shifts in perceived head estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates visual headings are biased towards retina coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results.
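The population vector decoder mentioned above can be illustrated in its simplest form (idealized cosine tuning and uniformly spaced preferred directions; the paper's two degree-of-freedom fit with lateral-sensitivity and coordinate-offset terms is not reproduced here):

```python
import numpy as np

def population_vector_heading(preferred_deg, rates):
    """Decode heading as the direction of the rate-weighted vector sum of
    each unit's preferred direction (classic population vector decoder)."""
    theta = np.deg2rad(np.asarray(preferred_deg))
    vx = np.sum(rates * np.cos(theta))
    vy = np.sum(rates * np.sin(theta))
    return np.rad2deg(np.arctan2(vy, vx)) % 360.0

# Demo: cosine-tuned units every 5 degrees (matching the 72-direction
# stimulus set) responding to a true heading of 40 degrees.
preferred = np.arange(0, 360, 5)
true_heading = 40.0
rates = 1.0 + np.cos(np.deg2rad(preferred - true_heading))  # nonnegative tuning
decoded = population_vector_heading(preferred, rates)
```

Because the decoded angle is the argument of a vector sum, shifting every unit's preferred direction by a fixed amount (e.g., a gaze-dependent coordinate offset) shifts the decoded heading by the same amount, which is the kind of reference-frame effect the study quantifies.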
Affiliation(s)
- Benjamin Thomas Crane
- Department of Otolaryngology, University of Rochester, Rochester, NY, United States of America
- Department of Bioengineering, University of Rochester, Rochester, NY, United States of America
- Department of Neurobiology and Anatomy, University of Rochester, Rochester, NY, United States of America
15. van Leeuwen PM, Gómez i Subils C, Jimenez AR, Happee R, de Winter JCF. Effects of visual fidelity on curve negotiation, gaze behaviour and simulator discomfort. Ergonomics 2015; 58:1347-1364. [PMID: 25693035] [DOI: 10.1080/00140139.2015.1005172]
Abstract
Technological developments have led to increased visual fidelity of driving simulators. However, simplified visuals have potential advantages, such as improved experimental control, reduced simulator discomfort and increased generalisability of results. In this driving simulator study, we evaluated the effects of visual fidelity on driving performance, gaze behaviour and subjective discomfort ratings. Twenty-four participants drove a track with 90° corners in (1) a high fidelity, textured environment, (2) a medium fidelity, non-textured environment without scenery objects and (3) a low-fidelity monochrome environment that only showed lane markers. The high fidelity level resulted in higher steering activity on straight road segments, higher driving speeds and higher gaze variance than the lower fidelity levels. No differences were found between the two lower fidelity levels. In conclusion, textures and objects were found to affect steering activity and driving performance; however, gaze behaviour during curve negotiation and self-reported simulator discomfort were unaffected.

PRACTITIONER SUMMARY: In a driving simulator study, three levels of visual fidelity were evaluated. The results indicate that the highest fidelity level, characterised by a textured environment, resulted in higher steering activity, higher driving speeds and higher variance of horizontal gaze than the two lower fidelity levels without textures.
Affiliation(s)
- Peter M van Leeuwen
- a Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology , Mekelweg 2, 2628 CD, Delft , The Netherlands
16
Smith M, Mole CD, Kountouriotis GK, Chisholm C, Bhakta B, Wilkie RM. Driving with homonymous visual field loss: Does visual search performance predict hazard detection? Br J Occup Ther 2015. [DOI: 10.1177/0308022614562786] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Introduction Stroke often causes homonymous visual field loss, which can lead to exclusion from driving. Retention of a driving licence is sometimes possible by completing an on-road assessment, but this is not practical for all. It is important to find simple tests that can inform the assessment and rehabilitation of driving-related visual-motor function. Method We developed novel computerised assessments: visual search; simple reaction and decision reaction to appearing pedestrians; and pedestrian detection during simulated driving. We tested 12 patients with stroke (seven left, five right field loss) and 12 controls. Results The homonymous visual field defect group was split into adequately compensated or inadequately compensated groups based on visual search performance. The inadequately compensated group had problems with stimuli in their affected field: they tended to react more slowly than controls and in the driving task they failed to detect a number of pedestrians. In contrast, the adequately compensated group were better at detecting pedestrians, though reaction times were slightly slower than controls. Conclusion We suggest that our search task can predict, to a limited extent, whether a person with stroke compensates for visual field loss, and may potentially identify suitability for specific rehabilitation to promote return to driving.
Affiliation(s)
- Matthew Smith
- Consultant, major trauma rehabilitation, Leeds Teaching Hospitals and University of Leeds, UK
17
Billington J, Wilkie RM, Wann JP. Obstacle avoidance and smooth trajectory control: neural areas highlighted during improved locomotor performance. Front Behav Neurosci 2013; 7:9. [PMID: 23423825 PMCID: PMC3575057 DOI: 10.3389/fnbeh.2013.00009] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2012] [Accepted: 01/29/2013] [Indexed: 11/22/2022] Open
Abstract
Visual control of locomotion typically involves both detection of current egomotion and anticipation of impending changes in trajectory. To determine if there are distinct neural systems involved in these aspects of steering control we used a slalom paradigm, which required participants to steer around objects in a computer simulated environment using a joystick. In some trials the whole slalom layout was visible (steering “preview” trials) so planning of the trajectory around future waypoints was possible, whereas in other trials the slalom course was only revealed one object at a time (steering “near” trials) so that future planning was restricted. In order to control for any differences in the motor requirements and visual properties between “preview” and “near” trials, we also interleaved control trials which replayed a participant's previous steering trials, with the task being to mimic the observed steering. Behavioral and fMRI results confirmed previous findings of superior parietal lobe (SPL) recruitment during steering trials, with a more extensive parietal and sensorimotor network during steering “preview” compared to steering “near” trials. Correlational analysis of fMRI data with respect to individual behavioral performance revealed that there was increased activation in the SPL in participants who exhibited smoother steering performance. These findings indicate that there is a role for the SPL in encoding path defining targets or obstacles during forward locomotion, which also provides a potential neural underpinning to explain improved steering performance on an individual basis.
Affiliation(s)
- Jac Billington
- Institute of Psychological Sciences, Faculty of Medicine and Health, The University of Leeds Leeds, UK
18
de Oliveira RF, Wann JP. Driving skills of young adults with developmental coordination disorder: Maintaining control and avoiding hazards. Hum Mov Sci 2012; 31:721-9. [DOI: 10.1016/j.humov.2011.06.010] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2010] [Revised: 06/16/2011] [Accepted: 06/21/2011] [Indexed: 10/17/2022]
19
Visuomotor control of steering: the artefact of the matter. Exp Brain Res 2011; 208:475-89. [DOI: 10.1007/s00221-010-2530-x] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2010] [Accepted: 09/20/2010] [Indexed: 10/18/2022]
20
François M, Morice A, Blouin J, Montagne G. Age-related decline in sensory processing for locomotion and interception. Neuroscience 2011; 172:366-78. [DOI: 10.1016/j.neuroscience.2010.09.020] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2010] [Revised: 08/12/2010] [Accepted: 09/14/2010] [Indexed: 10/18/2022]
21
Egger SW, Engelhardt HR, Britten KH. Monkey steering responses reveal rapid visual-motor feedback. PLoS One 2010; 5:e11975. [PMID: 20694144 PMCID: PMC2915918 DOI: 10.1371/journal.pone.0011975] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2010] [Accepted: 07/08/2010] [Indexed: 12/04/2022] Open
Abstract
The neural mechanisms underlying primate locomotion are largely unknown. While behavioral and theoretical work has provided a number of ideas of how navigation is controlled, progress will require direct physiological tests of the underlying mechanisms. In turn, this will require development of appropriate animal models. We trained three monkeys to track a moving visual target in a simple virtual environment, using a joystick to control their direction. The monkeys learned to quickly and accurately turn to the target, and their steering behavior was quite stereotyped and reliable. Monkeys typically responded to abrupt steps of target direction with a biphasic steering movement, exhibiting modest but transient overshoot. Response latencies averaged approximately 300 ms, and monkeys were typically back on target after about 1 s. We also exploited the variability of responses about the mean to explore the time-course of correlation between target direction and steering response. This analysis revealed a broad peak of correlation spanning approximately 400 ms in the recent past, during which steering errors provoke a compensatory response. This suggests that a continuous visual-motor loop controls steering behavior, even during the epoch surrounding transient inputs. Many results from the human literature also suggest that steering is controlled by such a closed loop. The similarity of our results to those in humans suggests the monkey is a very good animal model for human visually guided steering.
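The correlation analysis this abstract describes can be sketched as a lagged correlation between a target-direction error signal and the subsequent steering response. A minimal sketch on synthetic data; the sampling rate, delay, and noise level are illustrative assumptions, not values from the study:

```python
import numpy as np

def lagged_correlation(error, response, max_lag):
    """Correlate the error signal at time t with the response at t + lag."""
    corrs = []
    for lag in range(max_lag):
        e = error[:len(error) - lag] if lag else error
        r = response[lag:]
        corrs.append(np.corrcoef(e, r)[0, 1])
    return np.array(corrs)

# Synthetic closed-loop-like data: the response follows the error signal
# after a fixed delay (30 samples, e.g. ~300 ms at 100 Hz).
rng = np.random.default_rng(0)
error = rng.standard_normal(2000)
delay = 30
response = np.roll(error, delay) + 0.5 * rng.standard_normal(2000)
response[:delay] = 0.0  # discard the wrap-around from np.roll

corrs = lagged_correlation(error, response, max_lag=100)
print(int(np.argmax(corrs)))  # the peak recovers the built-in delay
```

On real steering data the same profile is broadened by the dynamics of the loop, which is how a window of compensatory correction, rather than a single latency, shows up.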
Affiliation(s)
- Seth W. Egger
- Center for Neuroscience, University of California Davis, Davis, California, United States of America
- Heidi R. Engelhardt
- Center for Neuroscience, University of California Davis, Davis, California, United States of America
- Kenneth H. Britten
- Center for Neuroscience and Department of Neurobiology, Physiology, and Behavior, University of California Davis, Davis, California, United States of America
22
Wilkie RM, Kountouriotis GK, Merat N, Wann JP. Using vision to control locomotion: looking where you want to go. Exp Brain Res 2010; 204:539-47. [PMID: 20556368 DOI: 10.1007/s00221-010-2321-4] [Citation(s) in RCA: 53] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2010] [Accepted: 05/29/2010] [Indexed: 11/30/2022]
Abstract
Looking at the inside edge of the road when steering a bend seems to be a well-established strategy linked to using a feature called the tangent point. An alternative proposal suggests that the gaze patterns observed when steering result from looking at the points in the world through which one wishes to pass. In this explanation fixation on or near the tangent point results from trying to take a trajectory that cuts the corner. To test these accounts, we recorded gaze and steering when taking different paths along curved roadways. Participants could gauge and maintain their lateral distance, but crucially, gaze was predominantly directed to the region proximal to the desired path rather than toward the tangent point per se. These results show that successful control of high-speed locomotion requires fixations in the direction you want to steer rather than using a single road feature like the tangent point.
Affiliation(s)
- R M Wilkie
- Institute of Psychological Sciences, University of Leeds, Leeds LS2 9JT, UK.
23
Environmental constraints modify the way an interceptive action is controlled. Exp Brain Res 2010; 202:397-411. [PMID: 20058151 DOI: 10.1007/s00221-009-2147-0] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2009] [Accepted: 12/15/2009] [Indexed: 10/20/2022]
Abstract
This study concerns the process by which agents select control laws. Participants adjusted their walking speed in a virtual environment in order to intercept approaching targets. Successful interception can be achieved with a constant bearing angle (CBA) strategy that relies on prospective information, or with a modified required velocity (MRV) strategy, which also includes predictive information. We manipulated the curvature of the target paths and the display condition of these paths. The curvature manipulation had large effects on the walking kinematics when the target paths were not displayed (informationally poor display). In contrast, the walking kinematics were less affected by the curvature manipulation when the target paths were displayed (informationally rich display). This indicates that participants used an MRV strategy in the informationally rich display and a CBA strategy in the informationally poor display. Quantitative fits of the respective models confirm this information-driven switch between the use of a strategy that relies on prospective information and a strategy that includes predictive information. We conclude that agents are capable of taking advantage of available information by selecting a suitable control law.
24
Influence of Presbyopic Corrections on Driving-Related Eye and Head Movements. Optom Vis Sci 2009; 86:E1267-75. [DOI: 10.1097/opx.0b013e3181bb41fa] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
25
Browning NA, Grossberg S, Mingolla E. Cortical dynamics of navigation and steering in natural scenes: Motion-based object segmentation, heading, and obstacle avoidance. Neural Netw 2009; 22:1383-98. [PMID: 19502003 DOI: 10.1016/j.neunet.2009.05.007] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2008] [Revised: 05/07/2009] [Accepted: 05/18/2009] [Indexed: 10/20/2022]
Abstract
Visually guided navigation through a cluttered natural scene is a challenging problem that animals and humans accomplish with ease. The ViSTARS neural model proposes how primates use motion information to segment objects and determine heading for purposes of goal approach and obstacle avoidance in response to video inputs from real and virtual environments. The model produces trajectories similar to those of human navigators. It does so by predicting how computationally complementary processes in cortical areas MT(-)/MSTv and MT(+)/MSTd compute object motion for tracking and self-motion for navigation, respectively. The model's retina responds to transients in the input stream. Model V1 generates a local speed and direction estimate. This local motion estimate is ambiguous due to the neural aperture problem. Model MT(+) interacts with MSTd via an attentive feedback loop to compute accurate heading estimates in MSTd that quantitatively simulate properties of human heading estimation data. Model MT(-) interacts with MSTv via an attentive feedback loop to compute accurate estimates of speed, direction and position of moving objects. This object information is combined with heading information to produce steering decisions wherein goals behave like attractors and obstacles behave like repellers. These steering decisions lead to navigational trajectories that closely match human performance.
Affiliation(s)
- N Andrew Browning
- Department of Cognitive and Neural Systems, Boston University, Boston, MA 02215, USA
26
Limitations of feedforward control in multiple-phase steering movements. Exp Brain Res 2009; 195:481-7. [PMID: 19404622 DOI: 10.1007/s00221-009-1813-6] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2008] [Accepted: 04/09/2009] [Indexed: 10/20/2022]
Abstract
When attempting to perform bi-phasic steering movements (such as a lane change) in the absence of visual and inertial feedback, drivers produce a systematic heading error in the direction of the lane change (Wallis et al., Curr Biol 12(4):295-299, 2002; J Exp Psychol Hum Percept Perform 33(55):1127-1144, 2007). Theories of steering control which employ exclusively open-loop control mechanisms cannot accommodate this finding. In this article we show that a similar steering error occurs with obstacle avoidance, and offer compelling evidence that it stems from a seemingly general failure of human operators to correctly internalise the dynamics of the steering wheel. With respect to lateral position, the steering wheel is an acceleration control device, but we present data indicating that drivers treat it as a rate control device. Previous findings from Wallis et al. can be explained the same way. Since an open-loop control mechanism will never succeed when the dynamics of the controller are internalised improperly, we go on to conclude that regular, appropriately timed sensory feedback-predominantly from vision-is necessary for regulating heading, even during well-practiced, everyday manoeuvres such as lane changing and obstacle avoidance.
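The mismatch this abstract describes can be reproduced with a toy simulation (a minimal sketch under simplified dynamics, not the authors' paradigm): a wheel input planned as if the wheel set lateral rate leaves a residual lateral velocity when the wheel in fact sets lateral acceleration.

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 3.0, dt)
# Open-loop input a driver would plan if the wheel set lateral *rate*:
# hold the wheel over for 1 s to shift sideways, then return it to center.
u = np.where(t < 1.0, 1.0, 0.0)

# Internal (rate-control) belief: y' = u, so drift stops once the wheel recenters.
y_believed = np.cumsum(u) * dt

# True (acceleration-control) dynamics: y'' = u, so the lateral velocity
# acquired during the pulse persists after the wheel recenters.
v = np.cumsum(u) * dt          # lateral velocity (the residual heading error)
y_actual = np.cumsum(v) * dt   # actual position keeps drifting

print(v[-1], y_believed[-1], y_actual[-1])
```

A biphasic correction, counter-steering to dump the residual velocity, is what the true dynamics require; without visual or inertial feedback to reveal the drift, the rate-control plan produces the systematic error in the direction of the maneuver.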
27
Bayesian motion estimation accounts for a surprising bias in 3D vision. Proc Natl Acad Sci U S A 2008; 105:12087-92. [PMID: 18697948 DOI: 10.1073/pnas.0804378105] [Citation(s) in RCA: 49] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Determining the approach of a moving object is a vital survival skill that depends on the brain combining information about lateral translation and motion-in-depth. Given the importance of sensing motion for obstacle avoidance, it is surprising that humans make errors, reporting an object will miss them when it is on a collision course with their head. Here we provide evidence that biases observed when participants estimate movement in depth result from the brain's use of a "prior" favoring slow velocity. We formulate a Bayesian model for computing 3D motion using independently estimated parameters for the shape of the visual system's slow velocity prior. We demonstrate the success of this model in accounting for human behavior in separate experiments that assess both sensitivity and bias in 3D motion estimation. Our results show that a surprising perceptual error in 3D motion perception reflects the importance of prior probabilities when estimating environmental properties.
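The mechanism can be sketched with the standard Gaussian result that a zero-mean slow-velocity prior shrinks each velocity estimate toward zero in proportion to its measurement noise. The noise values and trajectory below are illustrative assumptions, not the paper's fitted prior:

```python
import math

def shrink(v_obs, sigma_like, sigma_prior):
    """Posterior mean for a Gaussian likelihood and a zero-mean Gaussian prior."""
    w = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_like ** 2)
    return w * v_obs

# An object on a collision course: equal lateral (x) and approach (z) speeds.
vx, vz = 1.0, 1.0
# Motion-in-depth is measured less reliably than lateral motion, so it is
# shrunk toward zero more strongly (noise values are illustrative).
vx_hat = shrink(vx, sigma_like=0.2, sigma_prior=1.0)
vz_hat = shrink(vz, sigma_like=0.8, sigma_prior=1.0)

true_angle = math.degrees(math.atan2(vx, vz))       # 45 deg trajectory
perceived = math.degrees(math.atan2(vx_hat, vz_hat))
print(true_angle, perceived)  # perceived path is rotated away from the head
```

Because the depth component is underestimated more than the lateral one, the inferred trajectory is rotated wide of the observer; a collision course is judged a miss.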
28
Quek F, Ehrich R, Lockhart T. As Go the Feet … : On the Estimation of Attentional Focus from Stance. ACM Trans Comput Hum Interact 2008; 2008:97-104. [PMID: 20830212 PMCID: PMC2935654 DOI: 10.1145/1452392.1452412] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
The estimation of the direction of visual attention is critical to a large number of interactive systems. This paper investigates the cross-modal relation of the position of one's feet (or standing stance) to the focus of gaze. The intuition is that while one CAN have a range of attentional foci from a particular stance, one may be MORE LIKELY to look in specific directions given an approach vector and stance. We posit that the cross-modal relationship is constrained by biomechanics and personal style. We define a stance vector that models the approach direction before stopping and the pose of a subject's feet. We present a study where the subjects' feet and approach vector are tracked. The subjects read aloud contents of note cards in 4 locations. The order of 'visits' to the cards was randomized. Ten subjects read 40 lines of text each, yielding 400 stance vectors and gaze directions. We divided our data into 4 sets of 300 training and 100 test vectors and trained a neural net to estimate the gaze direction given the stance vector. Our results show that 31% of our gaze orientation estimates were within 5°, 51% of our estimates were within 10°, and 60% were within 15°. Given the ability to track foot position, the procedure is minimally invasive.
Affiliation(s)
- Francis Quek
- Center for Human-Computer Interaction Virginia Tech
29
Fajen BR, Warren WH. Behavioral dynamics of intercepting a moving target. Exp Brain Res 2007; 180:303-19. [PMID: 17273872 DOI: 10.1007/s00221-007-0859-6] [Citation(s) in RCA: 96] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2006] [Accepted: 01/05/2007] [Indexed: 11/26/2022]
Abstract
From matters of survival like chasing prey, to games like football, the problem of intercepting a target that moves in the horizontal plane is ubiquitous in human and animal locomotion. Recent data show that walking humans turn onto a straight path that leads a moving target by a constant angle, with some transients in the target-heading angle. We test four control strategies against the human data: (1) pursuit, or nulling the target-heading angle beta; (2) computing the required interception angle beta; (3) constant target-heading angle, or nulling change in the target-heading angle beta; and (4) constant bearing, or nulling change in the bearing direction of the target psi, which is equivalent to nulling change in the target-heading angle while factoring out the turning rate (beta - phi). We show that human interception behavior is best accounted for by the constant bearing model, and that it is robust to noise in its input and parameters. The models are also evaluated for their performance with stationary targets, and implications for the informational basis and neural substrate of steering control are considered. The results extend a dynamical systems model of human locomotor behavior from static to changing environments.
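Strategy (4) can be sketched as turning at a rate proportional to the change in the target's bearing direction, so that the bearing is driven back to constant. The gain, speeds, and geometry below are illustrative, not the paper's fitted model:

```python
import math

def time_to_intercept_cb(px, py, heading, speed, tx, ty, tvx, tvy,
                         gain=3.0, dt=0.02, steps=5000):
    """Constant-bearing steering: null change in the bearing direction psi."""
    psi_prev = math.atan2(ty - py, tx - px)
    for step in range(steps):
        tx += tvx * dt
        ty += tvy * dt
        px += speed * math.cos(heading) * dt
        py += speed * math.sin(heading) * dt
        if math.hypot(tx - px, ty - py) < 0.2:
            return (step + 1) * dt
        psi = math.atan2(ty - py, tx - px)
        # Wrapped bearing change this step; turn in proportion to null it.
        dpsi = math.atan2(math.sin(psi - psi_prev), math.cos(psi - psi_prev))
        heading += gain * dpsi
        psi_prev = psi
    return None  # never got within 0.2 units of the target

# Walker at the origin (speed 1.5) initially facing a target at (5, 10)
# that moves at unit speed along +x.
t = time_to_intercept_cb(0.0, 0.0, math.atan2(10.0, 5.0), 1.5,
                         5.0, 10.0, 1.0, 0.0)
print(t)
```

Because the controller only nulls bearing drift, it ends up leading the moving target by a roughly constant angle, which is the signature observed in the walking data.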
Affiliation(s)
- Brett R Fajen
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Carnegie Building 308, 110 8th Street, Troy, NY 12180-3590, USA.
30
Bastin J, Calvin S, Montagne G. Muscular proprioception contributes to the control of interceptive actions. J Exp Psychol Hum Percept Perform 2006; 32:964-72. [PMID: 16846291 DOI: 10.1037/0096-1523.32.4.964] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The authors proposed a model of the control of interceptive action over a ground plane (Chardenon, Montagne, Laurent, & Bootsma, 2004). This model is based on the cancellation of the rate of change of the angle between the current position of the target and the direction of displacement (i.e., the bearing angle). While several sources of visual information specify this angle, the contribution of proprioceptive information has not been directly tested. In this study, the authors used a virtual reality setup to study the role of proprioception when intercepting a moving target. In a series of experiments, the authors manipulated proprioceptive information by using the tendon vibration paradigm. The results revealed that proprioception is crucial not only to locate a moving target with respect to the body but also, and more importantly, to produce online displacement velocity changes to intercept a moving target. These findings emphasize the importance of proprioception in the control of interceptive action and illustrate the relevance of our model to account for the regulations produced by the participants.
Affiliation(s)
- Julien Bastin
- Université de la Méditerranée, Faculté des Sciences du Sport, UMR Mouvement et Perception, Marseille, France
31
Abstract
How might one account for the organization in behavior without attributing it to an internal control structure? The present article develops a theoretical framework called behavioral dynamics that integrates an information-based approach to perception with a dynamical systems approach to action. For a given task, the agent and its environment are treated as a pair of dynamical systems that are coupled mechanically and informationally. Their interactions give rise to the behavioral dynamics, a vector field with attractors that correspond to stable task solutions, repellers that correspond to avoided states, and bifurcations that correspond to behavioral transitions. The framework is used to develop theories of several tasks in which a human agent interacts with the physical environment, including bouncing a ball on a racquet, balancing an object, braking a vehicle, and guiding locomotion. Stable, adaptive behavior emerges from the dynamics of the interaction between a structured environment and an agent with simple control laws, under physical and informational constraints.
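The attractor idea can be illustrated with a damped second-order heading dynamic whose stable fixed point is the goal direction; this is a simplified, distance-free sketch of the goal component of such models, with illustrative parameters:

```python
import math

b, k = 3.0, 8.0            # damping and attractor stiffness (illustrative)
goal = math.radians(40.0)  # direction of the goal relative to straight ahead
phi, phi_dot = 0.0, 0.0    # current heading and turning rate
dt = 0.01

# Euler-integrate phi'' = -b * phi' - k * (phi - goal): the goal direction
# is a point attractor of the heading dynamics.
for _ in range(2000):
    phi_ddot = -b * phi_dot - k * (phi - goal)
    phi_dot += phi_ddot * dt
    phi += phi_dot * dt

print(round(math.degrees(phi), 1))  # heading has settled onto the goal
```

An obstacle enters the same equation as a repeller term of opposite sign, and transitions between routes appear as bifurcations of the resulting vector field.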
Affiliation(s)
- William H Warren
- Department of Cognitive and Linguistic Sciences, Brown University, Providence, RI 02912, USA.
32
Wilkie RM, Wann JP. The role of visual and nonvisual information in the control of locomotion. J Exp Psychol Hum Percept Perform 2006; 31:901-11. [PMID: 16262487 DOI: 10.1037/0096-1523.31.5.901] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
During locomotion, retinal flow, gaze angle, and vestibular information can contribute to one's perception of self-motion. Their respective roles were investigated during active steering: Retinal flow and gaze angle were biased by altering the visual information during computer-simulated locomotion, and vestibular information was controlled through use of a motorized chair that rotated the participant around his or her vertical axis. Chair rotation was made appropriate for the steering response of the participant or made inappropriate by rotating a proportion of the veridical amount. Large steering errors resulted from selective manipulation of retinal flow and gaze angle, and the pattern of errors provided strong evidence for an additive model of combination. Vestibular information had little or no effect on steering performance, suggesting that vestibular signals are not integrated with visual information for the control of steering at these speeds.
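The additive model's signature prediction, that the steering shift from biasing both cues equals the sum of the shifts from biasing each alone, can be sketched directly (the weights are illustrative; the study infers additivity from error patterns, not from these values):

```python
def steer_estimate(flow_dir, gaze_dir, w_flow=0.5, w_gaze=0.5):
    """Additive (weighted-sum) combination of two direction estimates, in deg."""
    return w_flow * flow_dir + w_gaze * gaze_dir

true_dir = 10.0
unbiased = steer_estimate(true_dir, true_dir)
flow_biased = steer_estimate(true_dir + 6.0, true_dir)  # perturb flow only
gaze_biased = steer_estimate(true_dir, true_dir + 6.0)  # perturb gaze only
both_biased = steer_estimate(true_dir + 6.0, true_dir + 6.0)

# Additivity: the combined bias is the sum of the individual biases.
print(flow_biased - unbiased, gaze_biased - unbiased, both_biased - unbiased)
```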
Affiliation(s)
- Richard M Wilkie
- Department of Psychology, University of Reading, Reading, United Kingdom.
33
Turano KA, Yu D, Hao L, Hicks JC. Optic-flow and egocentric-direction strategies in walking: Central vs peripheral visual field. Vision Res 2005; 45:3117-32. [PMID: 16084556 DOI: 10.1016/j.visres.2005.06.017] [Citation(s) in RCA: 59] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2004] [Revised: 06/09/2005] [Accepted: 06/14/2005] [Indexed: 10/25/2022]
Abstract
The impact of a central or peripheral visual field loss on the vision strategy used to guide walking was determined by measuring walking paths of visually impaired participants. An immersive virtual environment was used to dissociate the expected paths of the optic-flow and egocentric-direction strategies by offsetting the walker's point of view from the actual direction of walking. Environments consisted of a goal within a forest, the goal alone, or the forest alone following a brief presentation of the goal. The first two environments allowed an evaluation of the visual information used in a goal-directed task whereas the third environment investigated the information used in a memory-guided task. Participants had either a central (CFL) or peripheral visual field loss (PFL) or were fully sighted (FS). Results showed that, for the goal-directed task, the CFL group was less influenced by optic flow than was an age-matched FS group. Optic flow decreased heading error by only 1.3 degrees (16%) in the CFL group compared to 3.6 degrees (42%) in the FS group. The PFL group showed an optic-flow influence (2.4 degrees or 26%) comparable to an older, age-matched FS group (2.9 degrees or 31%). For the memory-guided task, all but the PFL group had heading errors comparable to those obtained in the goal-alone scene, demonstrating the ability to use an egocentric-direction strategy with a stored representation of either the goal's position or an offset relative to a landmark instead of a visible goal. The paths of the PFL group veered significantly from the predicted paths of both the optic-flow and egocentric-direction strategies. The findings of this study suggest that central vision is important for using optic flow to guide walking, whereas peripheral vision is important for establishing and/or updating an accurate representation of spatial structure for navigation.
Affiliation(s)
- Kathleen A Turano
- The Johns Hopkins University School of Medicine, Wilmer Eye Institute, Baltimore, MD, USA.
34
Brooks JO, Tyrrell RA, Frank TA. The Effects of Severe Visual Challenges on Steering Performance in Visually Healthy Young Drivers. Optom Vis Sci 2005; 82:689-97. [PMID: 16127334 DOI: 10.1097/01.opx.0000174722.96171.86] [Citation(s) in RCA: 32] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
PURPOSE Two experiments explored the extent to which induced blur, reduced luminance, and reduced visual fields affect drivers' steering performance in a driving simulator. METHODS In experiment 1, ten young participants (M = 21.2 years) drove at approximately 89 km/h (55 mph) along a curvy roadway while being exposed to blur (0 to + 10 D), luminance (0.003 to 16.7 cd/m²), and visual field (1.7 and 150 degrees) manipulations. In experiment 2, a new group of ten young participants (M = 18.5 years) drove while exposed to seven visual field sizes (1.7 to 150 degrees). RESULTS Steering was sensitive to a reduced field size but not to the blur and luminance challenges. Acuity, on the other hand, was sensitive to the blur and luminance challenges but not to reduced field size. DISCUSSION In healthy young drivers, steering performance is remarkably robust to severe blur and to extremely low luminances. These results support a key element of the selective degradation hypothesis advanced by Leibowitz and colleagues--that steering abilities are preserved at night even when the ability to recognize objects and hazards is not. Additional research should address the other element of this hypothesis--that drivers fail to appreciate the extent to which their visual abilities are degraded at night.
Affiliation(s)
- Johnell O Brooks
- Psychology Department, Clemson University, Clemson, South Carolina 29634-1355, USA.
35
Abstract
PURPOSE Driving is essentially a visuomotor task, and there is now compelling evidence that the disproportionate number of road accidents under night driving conditions is linked to changes in visual performance resulting from reduced lighting. The objective of this article is to establish the extent to which vision is either rod- or cone-dominated under night driving conditions. METHODS Visual thresholds are measured under lighting conditions that simulate urban lighting. Dark adaptation curves are obtained under three ambient lighting conditions ranging from low (0.1 cd/m²) to high (5 cd/m²) mesopic levels of retinal adaptation using circular discs of different sizes (1 degree, 2 degrees, 3 degrees, and 5 degrees) presented at retinal eccentricities of 0 degrees, 10 degrees, 20 degrees, 30 degrees, and 40 degrees. RESULTS The dark adaptation curves exhibit the classic inflection point between rod and cone activity for the lower levels of ambient illumination but a simple monophasic function for the high mesopic levels (>0.5 lux). Adaptation rates are four times faster for the higher compared with the lower illumination level and twice as fast for central compared with peripheral presentation. CONCLUSIONS The data suggest that vision is mediated by cone pathways at 5 lux and by rod pathways at 0.5/0.1 lux. This shift does not profoundly affect sensitivity, but because rod pathways are known to be slower than cone pathways, it will certainly affect observers' ability to respond to rapidly changing viewing conditions such as are encountered when driving at night.
Affiliation(s)
- Sotiris Plainis
- Vardinoyiannion Eye Institute of Crete, Department of Ophthalmology, School of Medicine, University of Crete, Heraklion, Crete, Greece.
36
Chardenon A, Montagne G, Laurent M, Bootsma RJ. A robust solution for dealing with environmental changes in intercepting moving balls. J Mot Behav 2005; 37:52-64. [PMID: 15642692 DOI: 10.3200/jmbr.37.1.52-62] [Citation(s) in RCA: 37] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
The authors tested whether a simple model based on the cancellation of the rate of change in bearing angle could account for the behavioral adaptations produced when individuals intercept moving balls while walking. In Experiment 1, the place of arrival of the ball and the angle of approach were varied. In accord with the model, velocity regulations were earlier and more pronounced the larger the angle of approach. In Experiment 2, ball speed unexpectedly changed during a trial, once again highlighting participants' functional velocity adaptations. A direct test of the model on the basis of each individual trial (N = 256) revealed that, on average, 70% of the total variance could be explained. Together, those results confirm the usefulness of such a robust strategy in the control of interceptive tasks.
Affiliation(s)
- A Chardenon
- Faculty of Sport Sciences, Movement and Perception Laboratory, University of the Mediterranean and CNRS, 163 Avenue de Luminy, 13009 Marseille, France.
37
Bastin J, Montagne G. The perceptual support of goal-directed displacement is context-dependent. Neurosci Lett 2005; 376:121-6. [PMID: 15698933 DOI: 10.1016/j.neulet.2004.11.040] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2004] [Revised: 11/10/2004] [Accepted: 11/16/2004] [Indexed: 11/23/2022]
Abstract
This study investigates the perceptual-motor organisation underlying the control of goal-directed displacement. We used a virtual reality set-up to study the locomotor interception of a moving ball. Subjects had to intercept moving balls by modifying displacement velocity if necessary, while the ball's place of arrival and the environment were manipulated. The results showed that subjects simultaneously managed multiple sources of information and placed priority on the most salient variables, depending on the task and environmental constraints.
Affiliation(s)
- Julien Bastin
- Faculté des Sciences du Sport, Université de la Méditerranée, UMR Mouvement et Perception, 163 Avenue de Luminy, CP 910, 13288 Marseille Cedex 9, France.
38
Harris JM, Drga VF. Using visual direction in three-dimensional motion perception. Nat Neurosci 2005; 8:229-33. [PMID: 15665878 DOI: 10.1038/nn1389] [Citation(s) in RCA: 28] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2004] [Accepted: 12/28/2004] [Indexed: 11/08/2022]
Abstract
The eyes receive slightly different views of the world, and the differences between their images (binocular disparity) are used to see depth. Several authors have suggested how the brain could exploit this information for three-dimensional (3D) motion perception, but here we consider a simpler strategy. Visual direction is the angle between the direction of an object and the direction that an observer faces. Here we describe human behavioral experiments in which observers use visual direction, rather than binocular information, to estimate an object's 3D motion even though this causes them to make systematic errors. This suggests that recent models of binocular 3D motion perception may not reflect the strategies that human observers actually use.
Affiliation(s)
- Julie M Harris
- School of Psychology, University of St. Andrews, St. Mary's College, South Street, St. Andrews, Fife, Scotland, KY16 9JP, UK.
39
Abstract
How do people walk to a moving target, and what visual information do they use to do so? Under a pursuit strategy, one would head toward the target's current position, whereas under an interception strategy, one would lead the target, ideally by maintaining a constant target-heading angle (or constant bearing angle). Either strategy may be guided by the egocentric direction of the target, local optic flow from the target, or global optic flow from the background. In four experiments, participants walked through a virtual environment to reach a target moving at a constant velocity. Regardless of the initial conditions, they walked ahead of the target for most of a trial at a fairly constant speed, consistent with an interception strategy (experiment 1). This behavior can be explained by trying to maintain a constant target-heading angle while trying to walk a straight path, with transient steering dynamics. In contrast to previous results for stationary targets, manipulation of the local optic flow from the target (experiment 2) and the global optic flow of the background (experiments 3 and 4) failed to influence interception behavior. Relative motion between the target and the background did affect the path slightly, presumably owing to its effect on perceived target motion. We conclude that humans use an interception strategy based on the egocentric direction of a moving target.
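The "constant target-heading angle while walking a straight path, with transient steering dynamics" account summarized above is often written as a second-order dynamical system. The form below is only a generic sketch with placeholder coefficients b and k, not fitted values from the study: heading φ is damped toward straight walking while turning acts to cancel drift in the target-heading angle β.

```latex
\ddot{\varphi} = -\,b\,\dot{\varphi} \;-\; k\,\dot{\beta}
```

Nulling \(\dot{\beta}\) holds the target-heading angle constant, which is the interception (constant bearing) strategy; the damping term \(-b\,\dot{\varphi}\) produces the transient steering dynamics mentioned in the abstract.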
Affiliation(s)
- Brett R Fajen
- Department of Cognitive Science, Carnegie Building 305, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY 12180-3590, USA.
40
41
Chardenon A, Montagne G, Laurent M, Bootsma RJ. The perceptual control of goal-directed locomotion: a common control architecture for interception and navigation? Exp Brain Res 2004; 158:100-8. [PMID: 15042262 DOI: 10.1007/s00221-004-1880-7] [Citation(s) in RCA: 29] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2003] [Accepted: 02/10/2004] [Indexed: 10/26/2022]
Abstract
Intercepting a moving object while locomoting is a highly complex and demanding ability. Notwithstanding the identification of several informational candidates, the role of perceptual variables in the control process underlying such skills remains an open question. In this study we used a virtual reality set-up to study locomotor interception of a moving ball. The subject had to walk along a straight path and could freely modify forward velocity, if necessary, in order to intercept, with the head, a ball moving along a straight path that led it to cross the agent's displacement axis. In a series of experiments we manipulated a local (ball size) and a global (focus of expansion) component of the visual flow, as well as the egocentric orientation of the ball. The experimental observations are well captured by a dynamic model linking the locomotor acceleration to properties of both global flow and egocentric direction. More precisely, the changes in locomotor velocity depend on a linear combination of the change in bearing angle and the change in egocentric orientation, allowing the emergence of adaptive behavior under a variety of circumstances. We conclude that the mechanisms underlying the control of different goal-directed locomotion tasks (i.e. steering and interceptive tasks) could share a common architecture.
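The dynamic model summarized above couples locomotor acceleration to two informational variables. As a hedged sketch (the gains k₁ and k₂ are placeholders, not the fitted values reported by the authors), the control law described reads:

```latex
\dot{v} = k_1\,\dot{\beta} \;+\; k_2\,\dot{\alpha}
```

where v is walking speed, β the bearing angle, and α the egocentric orientation of the ball; driving the weighted sum of their rates of change toward zero yields interception across the manipulations reported.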
Affiliation(s)
- A Chardenon
- UMR Movement and Perception, University of the Mediterranean, Marseille, France.