1. Hu H, Cheng M, Gao F, Sheng Y, Zheng R. Driver's Preview Modeling Based on Visual Characteristics through Actual Vehicle Tests. Sensors 2020; 20:6237. PMID: 33142911. PMCID: PMC7663110. DOI: 10.3390/s20216237.
Abstract
This paper proposes a method for obtaining drivers' fixation points and establishing a preview model based on actual vehicle tests. First, eight drivers were recruited to carry out actual vehicle tests on straight and curved roads. The curvature radii of the test curves were 200, 800, and 1500 m, and subjects were required to drive at speeds of 50, 70, and 90 km/h. During driving, drivers' eye movement data were collected using a head-mounted eye tracker, while road front-scene images and vehicle status were recorded simultaneously. An image-to-world coordinate mapping model of the drivers' visual information was constructed by correcting image distortion and matching the images from the driving recorder. Fixation point data for the drivers were then obtained using the Identification-Deviation Threshold (I-DT) algorithm. The Jarque-Bera test was used to verify the normal distribution of these data and to fit the parameters of the normal distribution. The preview points were then extracted and projected into world coordinates. Finally, the preview data obtained under these conditions were fitted to build general preview-time probability density maps for different driving speeds and road curvatures. This study extracts the preview characteristics of drivers through actual vehicle tests, which provides a visual-behavior reference for the humanized control of intelligent vehicles.
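The fixation extraction described above relies on the I-DT family of algorithms, which classify a run of gaze samples as a fixation when its spatial dispersion stays below a threshold for a minimum duration. As a rough sketch only (not the authors' implementation; the threshold and window values are placeholders), a dispersion-threshold detector looks like this:

```python
def idt_fixations(points, dispersion_threshold, min_samples):
    """Dispersion-threshold fixation detection (I-DT style).

    points: list of (x, y) gaze samples at a fixed sampling rate.
    Returns a list of (start_index, end_index, centroid) tuples.
    """
    def dispersion(window):
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations = []
    i, n = 0, len(points)
    while i + min_samples <= n:
        j = i + min_samples
        if dispersion(points[i:j]) <= dispersion_threshold:
            # Grow the window while dispersion stays below the threshold.
            while j < n and dispersion(points[i:j + 1]) <= dispersion_threshold:
                j += 1
            window = points[i:j]
            cx = sum(p[0] for p in window) / len(window)
            cy = sum(p[1] for p in window) / len(window)
            fixations.append((i, j - 1, (cx, cy)))
            i = j
        else:
            i += 1
    return fixations
```

In typical parameterizations the dispersion threshold corresponds to roughly 1° of visual angle and the minimum duration to 100-200 ms, converted to samples using the eye tracker's sampling rate.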
Affiliation(s)
- Hongyu Hu
- State Key Laboratory of Automotive Simulation and Control, Jilin University, Changchun 130022, China
- Ming Cheng
- State Key Laboratory of Automotive Simulation and Control, Jilin University, Changchun 130022, China
- Fei Gao
- State Key Laboratory of Automotive Simulation and Control, Jilin University, Changchun 130022, China (corresponding author)
- Yuhuan Sheng
- State Key Laboratory of Automotive Simulation and Control, Jilin University, Changchun 130022, China
- Rencheng Zheng
- Key Laboratory of Mechanism Theory and Equipment Design, Ministry of Education, Tianjin University, Tianjin 300072, China
2. Macuga KL. Multisensory Influences on Driver Steering During Curve Navigation. Human Factors 2019; 61:337-347. PMID: 30320509. DOI: 10.1177/0018720818805898.
Abstract
OBJECTIVE: The effects of inertial (vestibular and somatosensory) information on driver steering during curve navigation were investigated, using an electric four-wheel mobility vehicle outfitted with a steering wheel and a portable virtual reality system.
BACKGROUND: When driving, multiple sources of perceptual information are available. Researchers have focused on visual information, which plays a critical role in steering control. However, it is not yet well established how inertial information might contribute.
METHODS: I biased inertial cues by varying visual/inertial gains (doubled, halved, reversed) as drivers negotiated curving paths, and measured steering accuracy and efficiency. I also assessed whether exposure to inertial biases had an impact on postbias steering by comparing pre- and posttest session performance measures.
RESULTS: Doubling or halving inertial cues had little effect on steering performance. Inertial information disrupted steering only when it was reversed with respect to visual information. Over time, the influence of this extreme inertial bias was reduced, though not eliminated. Postbias curve navigation performance was not affected, likely because participants had learned to disregard, rather than integrate, biased inertial cues.
CONCLUSION: Results suggest that biased inertial information has little influence on curve navigation performance when visual information is available.
APPLICATION: Though inertial cues may be important for open-loop steering when visual cues are unavailable, their role in closed-loop steering appears less influential. This has implications for driving simulation and suggests that inertial discrepancies due to limitations in motion-cuing capabilities may not be especially problematic for simulating closed-loop curve steering tasks.
3. Okafuji Y, Mole CD, Merat N, Fukao T, Yokokohji Y, Inou H, Wilkie RM. Steering bends and changing lanes: the impact of optic flow and road edges on two-point steering control. J Vis 2018; 18(9):14. PMID: 30242386. DOI: 10.1167/18.9.14.
Abstract
Successful driving involves steering corrections that respond to immediate positional errors while also anticipating upcoming changes to the road layout ahead. In popular steering models these tasks are often treated as separate functions using two points: the near region for correcting current errors, and the far region for anticipating future steering requirements. Whereas two-point control models can capture many aspects of driver behavior, the nature of perceptual inputs to these two "points" remains unclear. Inspired by experiments that solely focused on road-edge information (Land & Horwood, 1995), two-point models have tended to ignore the role of optic flow during steering control. There is recent evidence demonstrating that optic flow should be considered within two-point control steering models (Mole, Kountouriotis, Billington, & Wilkie, 2016). To examine the impact of optic flow and road edges on two-point steering control we used a driving simulator to selectively and systematically manipulate these components. We removed flow and/or road-edge information from near or far regions of the scene, and examined how behaviors changed when steering along roads where the utility of far-road information varied. While steering behaviors were strongly influenced by the road-edges, there were also clear contributions of optic flow to steering responses. The patterns of steering were not consistent with optic flow simply feeding into two-point control; rather, the global optic flow field appeared to support effective steering responses across the time-course of each trajectory.
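Two-point models of the kind discussed here are often written in the form popularized by Salvucci and Gray (2004), where the commanded steering rate combines the rotation rates of a near and a far point with a proportional term on the near-point angle. A minimal sketch, with illustrative (not fitted) gains:

```python
def two_point_steering_rate(theta_near, theta_near_dot, theta_far_dot,
                            k_far=16.0, k_near=12.0, k_i=2.0):
    """Two-point steering law sketch (after Salvucci & Gray, 2004).

    theta_near: visual angle to the near point (rad), capturing current lane error.
    theta_near_dot, theta_far_dot: rates of change of the near/far point angles (rad/s).
    Returns the commanded steering rate (rad/s); the gains are placeholders.
    """
    return k_far * theta_far_dot + k_near * theta_near_dot + k_i * theta_near
```

Manipulations such as those in this study can then be framed as removing or biasing the visual inputs that feed `theta_near` and `theta_far_dot`.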
Affiliation(s)
- Yuki Okafuji
- School of Psychology, University of Leeds, Leeds, UK; Institute for Transport Studies, University of Leeds, Leeds, UK; Department of Electrical and Electronic Engineering, Ritsumeikan University, Kusatsu-shi, Japan; Department of Mechanical Engineering, Kobe University, Kobe-shi, Japan
- Natasha Merat
- Institute for Transport Studies, University of Leeds, Leeds, UK
- Takanori Fukao
- Department of Electrical and Electronic Engineering, Ritsumeikan University, Kusatsu-shi, Japan
- Hiroshi Inou
- DENSO International America, Inc., Southfield, MI, USA
4. When flow is not enough: evidence from a lane changing task. Psychological Research 2018; 84:834-849. PMID: 30088078. DOI: 10.1007/s00426-018-1070-z.
Abstract
Humans are able to estimate their heading on the basis of optic flow information and it has been argued that we use flow in this way to guide navigation. Consistent with this idea, several studies have reported good navigation performance in flow fields. However, one criticism of these studies is that they have generally focused on the task of walking or steering towards a target, offering an additional, salient directional cue. Hence, it remains a matter of debate as to whether humans are truly able to control steering in the presence of optic flow alone. In this study, we report a set of maneuvers carried out in flow fields in the absence of a physical target. To do this, we studied the everyday task of lane changing, a commonplace multiphase steering maneuver which can be conceptualized without the need for a target. What is more (and here is the crucial quirk), previous literature has found that in the absence of visual feedback, drivers show a systematic, asymmetric steering response, resulting in a systematic final heading error. If optic flow is sufficient for controlling navigation through our environment, we would expect this asymmetry to disappear whenever optic flow is provided. However, our results show that this asymmetry persisted, even in the presence of a flow field, implying that drivers are unable to use flow to guide normal steering responses in this task.
5. Zhao H, Warren WH. On-line and model-based approaches to the visual control of action. Vision Res 2014; 110:190-202. PMID: 25454700. DOI: 10.1016/j.visres.2014.10.008.
Abstract
Two general approaches to the visual control of action have emerged in the last few decades, known as the on-line and model-based approaches. The key difference between them is whether action is controlled by current visual information or on the basis of an internal world model. In this paper, we evaluate three hypotheses: strong on-line control, strong model-based control, and a hybrid solution that combines on-line control with weak off-line strategies. We review experimental research on the control of locomotion and manual actions, which indicates that (a) an internal world model is neither sufficient nor necessary to control action at normal levels of performance; (b) current visual information is necessary and sufficient to control action at normal levels; and (c) under certain conditions (e.g. occlusion) action is controlled by less accurate, simple strategies such as heuristics, visual-motor mappings, or spatial memory. We conclude that the strong model-based hypothesis is not sustainable. Action is normally controlled on-line when current information is available, consistent with the strong on-line control hypothesis. In exceptional circumstances, action is controlled by weak, context-specific, off-line strategies. This hybrid solution is comprehensive, parsimonious, and able to account for a variety of tasks under a range of visual conditions.
Affiliation(s)
- Huaiyong Zhao
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, United States
- William H Warren
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, United States
6. Murray NG, Ponce de Leon M, Ambati VNP, Saucedo F, Kennedy E, Reed-Jones RJ. Simulated visual field loss does not alter turning coordination in healthy young adults. J Mot Behav 2014; 46:423-431. PMID: 25204364. DOI: 10.1080/00222895.2014.931272.
Abstract
Turning while walking is an important component of adaptive locomotion. Current hypotheses regarding the motor control of body segment coordination during turning suggest a heavy influence of visual information. The authors aimed to examine whether visual field impairment (central loss or peripheral loss) affects body segment coordination during walking turns in healthy young adults. No significant differences in the onset time of segments or in intersegment coordination were observed as a result of visual field occlusion. These results suggest that healthy young adults can use visual information obtained from the central and peripheral visual fields interchangeably, pointing to the flexibility of visuomotor control in healthy young adults. Further study in populations with chronic visual impairment and those with turning difficulties is warranted.
Affiliation(s)
- Nicholas G Murray
- Interdisciplinary Health Sciences, College of Health Sciences, The University of Texas at El Paso
7. Vansteenkiste P, Van Hamme D, Veelaert P, Philippaerts R, Cardon G, Lenoir M. Cycling around a curve: the effect of cycling speed on steering and gaze behavior. PLoS One 2014; 9:e102792. PMID: 25068380. PMCID: PMC4113223. DOI: 10.1371/journal.pone.0102792.
Abstract
Although it is generally accepted that visual information guides steering, it is still unclear whether a curvature matching strategy or a ‘look where you are going’ strategy is used while steering through a curved road. The current experiment investigated to what extent the existing models for curve driving also apply to cycling around a curve, and tested the influence of cycling speed on steering and gaze behavior. Twenty-five participants were asked to cycle through a semicircular lane three consecutive times at three different speeds while staying in the center of the lane. The observed steering behavior suggests that an anticipatory steering strategy was used at curve entrance and a compensatory strategy was used to steer through the actual bend of the curve. A shift of gaze from the center to the inside edge of the lane indicates that at low cycling speed the ‘look where you are going’ strategy was preferred, while at higher cycling speeds participants seemed to prefer the curvature matching strategy. The authors suggest that visual information from both steering strategies contributes to the steering system and can be used in a flexible way. Based on a familiarization effect, it can be assumed that steering is not only guided by vision but that a short-term learning component should also be taken into account.
Affiliation(s)
- Pieter Vansteenkiste
- Department of Movement and Sports Sciences, Ghent University, Ghent, Belgium (corresponding author)
- David Van Hamme
- Department of Telecommunications and Information Processing, Ghent University, Ghent, Belgium
- Peter Veelaert
- Department of Telecommunications and Information Processing, Ghent University, Ghent, Belgium
- Renaat Philippaerts
- Department of Movement and Sports Sciences, Ghent University, Ghent, Belgium
- Greet Cardon
- Department of Movement and Sports Sciences, Ghent University, Ghent, Belgium
- Matthieu Lenoir
- Department of Movement and Sports Sciences, Ghent University, Ghent, Belgium
8. Blind(fold)ed by science: a constant target-heading angle is used in visual and nonvisual pursuit. Psychon Bull Rev 2013; 20:923-934. PMID: 23440726. DOI: 10.3758/s13423-013-0412-5.
Abstract
Previous work investigating the strategies that observers use to intercept moving targets has shown that observers maintain a constant target-heading angle (CTHA) to achieve interception. Most of this work has concluded or indirectly assumed that vision is necessary to do this. We investigated whether blindfolded pursuers chasing a ball carrier holding a beeping football would utilize the same strategy that sighted observers use to chase a ball carrier. Results confirm that both blindfolded and sighted pursuers use a CTHA strategy in order to intercept targets, whether jogging or walking and irrespective of football experience and path and speed deviations of the ball carrier during the course of the pursuit. This work shows that the mechanisms involved in intercepting moving targets may be designed to use different sensory mechanisms in order to drive behavior that leads to the same end result. This has potential implications for the supramodal representation of motion perception in the human brain.
9. Zaal PMT, Nieuwenhuizen FM, van Paassen MM, Mulder M. Modeling Human Control of Self-Motion Direction With Optic Flow and Vestibular Motion. IEEE Transactions on Cybernetics 2013; 43:544-556. PMID: 22987529. DOI: 10.1109/tsmcb.2012.2212188.
Abstract
In this paper, we investigate the effects of visual and motion stimuli on the manual control of one's direction of self-motion. In a flight simulator, subjects conducted an active target-following disturbance-rejection task, using a compensatory display. To simulate a vehicular control task, the direction of vehicular motion was shown on the outside visual display in two ways: an explicit presentation using a symbol, and an implicit presentation through the focus of radial outflow that emerges from optic flow. In addition, the effects of the relative strength of congruent vestibular motion cues were investigated. The dynamic properties of human visual and vestibular motion perception paths were modeled using a control-theoretical approach. As expected, improved tracking performance was found for the configurations that explicitly showed the direction of self-motion. The human visual time delay increased by approximately 150 ms for the optic flow conditions, relative to explicit presentations. Vestibular motion, providing higher-order information on the direction of self-motion, allowed subjects to partially compensate for this visual perception delay, improving performance. Parameter estimates of the operator control model show that, with vestibular motion, the visual feedback becomes stronger, indicating that operators are more confident to act on optic flow information when congruent vestibular motion cues are present.
10. Wilkie RM, Johnson RL, Culmer PR, Allen R, Mon-Williams M. Looking at the task in hand impairs motor learning. J Neurophysiol 2012; 108:3043-3048. PMID: 22993255. DOI: 10.1152/jn.00440.2012.
Abstract
"Visual capture" is the term used to describe vision being afforded a higher weighting than other sensory information. Visual capture can produce powerful illusory effects with individuals misjudging the size and position of their hands. The advent of laparoscopic surgical techniques raises the question of whether visual capture can interfere with an individual's rate of motor learning. We compared adaptation to distorted visual feedback in two groups: the Direct group appeared to have the advantage of directly viewing the input device, while the Indirect group used the same input device but viewed their movements on a remote screen. Counterintuitively, the Indirect group adapted more readily to distorted feedback and showed enhanced performance. The results show that visual capture impairs adaptation to distorted visual feedback, suggesting that surgeons need to avoid viewing their hands when learning laparoscopic techniques.
Affiliation(s)
- Richard M Wilkie
- Institute of Psychological Sciences, University of Leeds, Leeds, United Kingdom
11. Bernardin D, Kadone H, Bennequin D, Sugar T, Zaoui M, Berthoz A. Gaze anticipation during human locomotion. Exp Brain Res 2012; 223:65-78. PMID: 22968738. DOI: 10.1007/s00221-012-3241-2.
Abstract
During locomotion, a top-down organization has previously been demonstrated, with the head acting as a stabilized platform and gaze anticipating the horizontal direction of the trajectory. However, the anticipatory sequence from gaze to trajectory and body segments has not been quantitatively assessed. The present paper provides a detailed investigation of the spatial and temporal anticipatory relationships among the direction of gaze and body segments during locomotion. Participants had to walk along several mentally simulated complex trajectories, without any visual cues indicating the trajectory to follow; the trajectory shapes were presented to the participants on a sheet of paper. Our study includes an analysis of the relationships between horizontal gaze anticipatory behavior and upcoming changes in the trajectory. Our findings confirm the following: 1) the hierarchically ordered organization of gaze and body segment orientations during complex trajectories and free locomotion, with gaze direction anticipating head orientation, and head orientation anticipating the reorientation of the other body segments; 2) the influence of the curvature of the trajectory and task constraints on the temporal and spatial relationships between gaze and the body segments, with increased curvature resulting in increased temporal and spatial anticipation; and 3) a different sequence of gaze movements at inflection points, where gaze plans a much later segment of the trajectory.
Affiliation(s)
- Delphine Bernardin
- LPPA, UMR7152, CNRS-Collège de France, 11, Place Marcelin Berthelot, 75005, Paris, France
12. Campos JL, Butler JS, Bülthoff HH. Multisensory integration in the estimation of walked distances. Exp Brain Res 2012; 218:551-565. PMID: 22411581. DOI: 10.1007/s00221-012-3048-1.
Abstract
When walking through space, both dynamic visual information (optic flow) and body-based information (proprioceptive and vestibular) jointly specify the magnitude of distance travelled. While recent evidence has demonstrated the extent to which each of these cues can be used independently, less is known about how they are integrated when simultaneously present. Many studies have shown that sensory information is integrated using a weighted linear sum, yet little is known about whether this holds true for the integration of visual and body-based cues for travelled distance perception. In this study using Virtual Reality technologies, participants first travelled a predefined distance and subsequently matched this distance by adjusting an egocentric, in-depth target. The visual stimulus consisted of a long hallway and was presented in stereo via a head-mounted display. Body-based cues were provided either by walking in a fully tracked free-walking space (Exp. 1) or by being passively moved in a wheelchair (Exp. 2). Travelled distances were provided either through optic flow alone, body-based cues alone or through both cues combined. In the combined condition, visually specified distances were either congruent (1.0×) or incongruent (0.7× or 1.4×) with distances specified by body-based cues. Responses reflect a consistent combined effect of both visual and body-based information, with an overall higher influence of body-based cues when walking and a higher influence of visual cues during passive movement. When comparing the results of Experiments 1 and 2, it is clear that both proprioceptive and vestibular cues contribute to travelled distance estimates during walking. These observed results were effectively described using a basic linear weighting model.
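The "weighted linear sum" above is commonly modeled as reliability-weighted (maximum-likelihood) cue combination, with each cue weighted by its inverse variance. A minimal sketch, under the standard assumption of independent Gaussian noise on each cue:

```python
def fuse_cues(estimates, sigmas):
    """Inverse-variance weighted fusion of redundant cues.

    Each cue i gets weight w_i = (1/sigma_i**2) / sum_j (1/sigma_j**2),
    the maximum-likelihood combination for independent Gaussian noise.
    Returns (fused_estimate, fused_sigma).
    """
    precisions = [1.0 / s ** 2 for s in sigmas]
    total = sum(precisions)
    fused = sum((p / total) * e for p, e in zip(precisions, estimates))
    fused_sigma = (1.0 / total) ** 0.5  # fused estimate is less noisy than either cue
    return fused, fused_sigma
```

For example, visual and body-based distance estimates of 10 m and 12 m with equal noise fuse to 11 m; a noisier visual cue shifts the fused estimate toward the body-based one, matching the pattern of higher body-based weighting during walking reported here.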
Affiliation(s)
- Jennifer L Campos
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Spemannstr. 38, 72076 Tübingen, Germany
13. Campos J, Bülthoff H. Multimodal Integration during Self-Motion in Virtual Reality. Front Neurosci 2011. DOI: 10.1201/9781439812174-38.
14. Campos J, Bülthoff H. Multimodal Integration during Self-Motion in Virtual Reality. Front Neurosci 2011. DOI: 10.1201/b11092-38.
15. Egger SW, Engelhardt HR, Britten KH. Monkey steering responses reveal rapid visual-motor feedback. PLoS One 2010; 5:e11975. PMID: 20694144. PMCID: PMC2915918. DOI: 10.1371/journal.pone.0011975.
Abstract
The neural mechanisms underlying primate locomotion are largely unknown. While behavioral and theoretical work has provided a number of ideas of how navigation is controlled, progress will require direct physiological tests of the underlying mechanisms. In turn, this will require development of appropriate animal models. We trained three monkeys to track a moving visual target in a simple virtual environment, using a joystick to control their direction. The monkeys learned to quickly and accurately turn to the target, and their steering behavior was quite stereotyped and reliable. Monkeys typically responded to abrupt steps of target direction with a biphasic steering movement, exhibiting modest but transient overshoot. Response latencies averaged approximately 300 ms, and monkeys were typically back on target after about 1 s. We also exploited the variability of responses about the mean to explore the time course of correlation between target direction and steering response. This analysis revealed a broad peak of correlation spanning approximately 400 ms in the recent past, during which steering errors provoke a compensatory response. This suggests that a continuous visual-motor loop controls steering behavior, even during the epoch surrounding transient inputs. Many results from the human literature also suggest that steering is controlled by such a closed loop. The similarity of our results to those in humans suggests the monkey is a very good animal model for human visually guided steering.
Affiliation(s)
- Seth W. Egger
- Center for Neuroscience, University of California Davis, Davis, California, United States of America
- Heidi R. Engelhardt
- Center for Neuroscience, University of California Davis, Davis, California, United States of America
- Kenneth H. Britten
- Center for Neuroscience and Department of Neurobiology, Physiology, and Behavior, University of California Davis, Davis, California, United States of America (corresponding author)
16. Wilkie RM, Kountouriotis GK, Merat N, Wann JP. Using vision to control locomotion: looking where you want to go. Exp Brain Res 2010; 204:539-547. PMID: 20556368. DOI: 10.1007/s00221-010-2321-4.
Abstract
Looking at the inside edge of the road when steering a bend seems to be a well-established strategy linked to using a feature called the tangent point. An alternative proposal suggests that the gaze patterns observed when steering result from looking at the points in the world through which one wishes to pass. In this explanation fixation on or near the tangent point results from trying to take a trajectory that cuts the corner. To test these accounts, we recorded gaze and steering when taking different paths along curved roadways. Participants could gauge and maintain their lateral distance, but crucially, gaze was predominantly directed to the region proximal to the desired path rather than toward the tangent point per se. These results show that successful control of high-speed locomotion requires fixations in the direction you want to steer rather than using a single road feature like the tangent point.
Affiliation(s)
- R M Wilkie
- Institute of Psychological Sciences, University of Leeds, Leeds LS2 9JT, UK
17. Environmental constraints modify the way an interceptive action is controlled. Exp Brain Res 2010; 202:397-411. PMID: 20058151. DOI: 10.1007/s00221-009-2147-0.
Abstract
This study concerns the process by which agents select control laws. Participants adjusted their walking speed in a virtual environment in order to intercept approaching targets. Successful interception can be achieved with a constant bearing angle (CBA) strategy that relies on prospective information, or with a modified required velocity (MRV) strategy, which also includes predictive information. We manipulated the curvature of the target paths and the display condition of these paths. The curvature manipulation had large effects on the walking kinematics when the target paths were not displayed (informationally poor display). In contrast, the walking kinematics were less affected by the curvature manipulation when the target paths were displayed (informationally rich display). This indicates that participants used an MRV strategy in the informationally rich display and a CBA strategy in the informationally poor display. Quantitative fits of the respective models confirm this information-driven switch between the use of a strategy that relies on prospective information and a strategy that includes predictive information. We conclude that agents are able to take advantage of available information by selecting a suitable control law.
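The CBA strategy named above can be sketched as a simple control law: the pursuer monitors the bearing angle to the target and speeds up or slows down to null its drift, since a constant bearing angle between two converging paths implies eventual interception. The gain and sign convention below are illustrative assumptions, not the fitted model from this study:

```python
import math

def bearing_angle(agent_pos, agent_heading, target_pos):
    """Angle between the agent's heading and the agent-to-target line (rad)."""
    dx = target_pos[0] - agent_pos[0]
    dy = target_pos[1] - agent_pos[1]
    return math.atan2(dy, dx) - agent_heading

def cba_speed_change(beta_now, beta_prev, dt, k=5.0):
    """CBA sketch: command a speed change proportional to the bearing drift,
    driving the bearing-angle rate toward zero (gain and sign are placeholders)."""
    beta_dot = (beta_now - beta_prev) / dt
    return k * beta_dot
```

An MRV-style alternative would add a predictive term based on the target's displayed future path; the display effects reported here correspond to that predictive information being available or not.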
18. Limitations of feedforward control in multiple-phase steering movements. Exp Brain Res 2009; 195:481-487. PMID: 19404622. DOI: 10.1007/s00221-009-1813-6.
Abstract
When attempting to perform bi-phasic steering movements (such as a lane change) in the absence of visual and inertial feedback, drivers produce a systematic heading error in the direction of the lane change (Wallis et al., Curr Biol 12(4):295-299, 2002; J Exp Psychol Hum Percept Perform 33(55):1127-1144, 2007). Theories of steering control which employ exclusively open-loop control mechanisms cannot accommodate this finding. In this article we show that a similar steering error occurs with obstacle avoidance, and offer compelling evidence that it stems from a seemingly general failure of human operators to correctly internalise the dynamics of the steering wheel. With respect to lateral position, the steering wheel is an acceleration control device, but we present data indicating that drivers treat it as a rate control device. Previous findings from Wallis et al. can be explained the same way. Since an open-loop control mechanism will never succeed when the dynamics of the controller are internalised improperly, we go on to conclude that regular, appropriately timed sensory feedback-predominantly from vision-is necessary for regulating heading, even during well-practiced, everyday manoeuvres such as lane changing and obstacle avoidance.
|
19
|
Bastin J, Jacobs DM, Morice AHP, Craig C, Montagne G. Testing the role of expansion in the prospective control of locomotion. Exp Brain Res 2008; 191:301-12. [PMID: 18704385 DOI: 10.1007/s00221-008-1522-6] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2006] [Accepted: 07/25/2008] [Indexed: 11/29/2022]
Abstract
The constant bearing angle (CBA) strategy is a prospective strategy that permits the interception of moving objects. The purpose of the present study is to test this strategy. Participants were asked to walk through a virtual environment and to change, if necessary, their walking speed so as to intercept approaching targets. The targets followed either a rectilinear or a curvilinear trajectory and target size was manipulated both within trials (target size was gradually changed during the trial in order to bias expansion) and between trials (targets of different sizes were used). The curvature manipulation had a large effect on the kinematics of walking, which is in agreement with the CBA strategy. The target size manipulations also affected the kinematics of walking. Although these effects of target size are not predicted by the CBA strategy, quantitative comparisons of observed kinematics and the kinematics predicted by the CBA strategy showed good fits. Furthermore, predictions based on the CBA strategy were deemed superior to predictions based on a required velocity (V_REQ) model. The role of target size and expansion in the prospective control of walking is discussed.
Affiliation(s)
- Julien Bastin
- Faculté des Sciences du Sport, Institut des Sciences du Mouvement, Etienne-Jules MAREY, UMR 6233 Université de la Méditerranée and CNRS, 163 Avenue de Luminy, 13009 Marseille, France
|
20
|
Fajen BR, Warren WH. Behavioral dynamics of intercepting a moving target. Exp Brain Res 2007; 180:303-19. [PMID: 17273872 DOI: 10.1007/s00221-007-0859-6] [Citation(s) in RCA: 96] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2006] [Accepted: 01/05/2007] [Indexed: 11/26/2022]
Abstract
From matters of survival like chasing prey, to games like football, the problem of intercepting a target that moves in the horizontal plane is ubiquitous in human and animal locomotion. Recent data show that walking humans turn onto a straight path that leads a moving target by a constant angle, with some transients in the target-heading angle. We test four control strategies against the human data: (1) pursuit, or nulling the target-heading angle beta; (2) computing the required interception angle beta; (3) constant target-heading angle, or nulling change in the target-heading angle beta; and (4) constant bearing, or nulling change in the bearing direction of the target psi, which is equivalent to nulling change in the target-heading angle while factoring out the turning rate (d beta/dt = d psi/dt - d phi/dt). We show that human interception behavior is best accounted for by the constant bearing model, and that it is robust to noise in its input and parameters. The models are also evaluated for their performance with stationary targets, and implications for the informational basis and neural substrate of steering control are considered. The results extend a dynamical systems model of human locomotor behavior from static to changing environments.
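The constant bearing model in (4) behaves like proportional navigation from the guidance literature: the agent's turning rate is made proportional to the rotation rate of the target's bearing direction, which drives that rotation to zero and yields the straight leading path described above. A minimal sketch, with geometry, speeds, and gain chosen only for illustration (not the paper's fitted model or parameters):

```python
import math

def intercept_miss(N=3.0, dt=0.01, steps=2000):
    """Steer by constant bearing: turning rate is N times the rate of
    change of the bearing direction of the target (psi)."""
    ax, ay, heading, speed = 0.0, 0.0, 0.72, 1.0   # heading starts ~0.3 rad off
    tx, ty, tvx = 5.0, 5.0, 0.5                    # target drifts along +x
    prev_psi = math.atan2(ty - ay, tx - ax)
    best = math.hypot(tx - ax, ty - ay)
    for _ in range(steps):
        ax += speed * math.cos(heading) * dt       # agent moves at fixed speed
        ay += speed * math.sin(heading) * dt
        tx += tvx * dt                             # target moves at fixed velocity
        psi = math.atan2(ty - ay, tx - ax)
        heading += N * (psi - prev_psi)            # null change in bearing direction
        prev_psi = psi
        best = min(best, math.hypot(tx - ax, ty - ay))
    return best
```

With the gain on, the agent turns onto a straight path that leads the target and the miss distance is small; with N = 0 the initial heading error goes uncorrected and the target is missed by a wide margin.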
Affiliation(s)
- Brett R Fajen
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Carnegie Building 308, 110 8th Street, Troy, NY 12180-3590, USA.
|
21
|
Bastin J, Calvin S, Montagne G. Muscular proprioception contributes to the control of interceptive actions. J Exp Psychol Hum Percept Perform 2006; 32:964-72. [PMID: 16846291 DOI: 10.1037/0096-1523.32.4.964] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The authors proposed a model of the control of interceptive action over a ground plane (Chardenon, Montagne, Laurent, & Bootsma, 2004). This model is based on the cancellation of the rate of change of the angle between the current position of the target and the direction of displacement (i.e., the bearing angle). While several sources of visual information specify this angle, the contribution of proprioceptive information has not been directly tested. In this study, the authors used a virtual reality setup to study the role of proprioception when intercepting a moving target. In a series of experiments, the authors manipulated proprioceptive information by using the tendon vibration paradigm. The results revealed that proprioception is crucial not only to locate a moving target with respect to the body but also, and more importantly, to produce online displacement velocity changes to intercept a moving target. These findings emphasize the importance of proprioception in the control of interceptive action and illustrate the relevance of our model to account for the regulations produced by the participants.
Affiliation(s)
- Julien Bastin
- Université de la Méditerranée, Faculté des Sciences du Sport, UMR Mouvement et Perception, Marseille, France
|
22
|
Macuga KL, Beall AC, Kelly JW, Smith RS, Loomis JM. Changing lanes: inertial cues and explicit path information facilitate steering performance when visual feedback is removed. Exp Brain Res 2006; 178:141-50. [PMID: 17091302 DOI: 10.1007/s00221-006-0718-x] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2006] [Accepted: 09/14/2006] [Indexed: 10/23/2022]
Abstract
Can driver steering behaviors, such as a lane change, be executed without visual feedback? In a recent study with a fixed-base driving simulator, drivers failed to execute the return phase of a lane change when steering without vision, resulting in systematic final heading errors biased in the direction of the lane change. Here we challenge the generality of that finding. Suppose that, when asked to perform a lane (position) change, drivers fail to recognize that a heading change is required to make a lateral position change. However, given an explicit path, the necessary heading changes become apparent. Here we demonstrate that when heading requirements are made explicit, drivers appropriately implement the return phase. More importantly, by using an electric vehicle outfitted with a portable virtual reality system, we also show that valid inertial information (i.e., vestibular and somatosensory cues) enables accurate steering behavior when vision is absent. Thus, the failure to properly execute a lane change in a driving simulator without a moving base does not present a fundamental problem for feed-forward driving behavior.
Affiliation(s)
- Kristen L Macuga
- Department of Psychology, University of California, Santa Barbara, CA 93106-9660, USA.
|