1
Delle Monache S, Paolocci G, Scalici F, Conti A, Lacquaniti F, Indovina I, Bosco G. Interception of vertically approaching objects: temporal recruitment of the internal model of gravity and contribution of optical information. Front Physiol 2023; 14:1266332. PMID: 38046950; PMCID: PMC10690631; DOI: 10.3389/fphys.2023.1266332.
Abstract
Introduction: Recent views posit that precise control of interceptive timing can be achieved by combining on-line processing of visual information with predictions based on prior experience. Indeed, for the interception of objects free-falling under gravity, experimental evidence shows that time-to-contact predictions can be derived from an internal gravity representation in the vestibular cortex. However, it is not yet clear whether the internal gravity model is fully engaged at the outset of target motion or is reinforced by visual motion processing at later stages. Moreover, there is no conclusive evidence about the relative contribution of internalized gravity and optical information in determining time-to-contact estimates. Methods: We sought to gain insight into this issue by asking 32 participants to intercept, in virtual reality, free-falling objects approaching directly from above. Object motion lasted between 800 and 1100 ms and was either congruent with gravity (1 g accelerated motion) or not (constant velocity or -1 g decelerated motion). We analyzed the accuracy and precision of the interceptive responses and fitted them with Bayesian regression models that included predictors related to the recruitment of a priori gravity information at different times during target motion, as well as predictors based on the available optical information. Results: Consistent with the use of internalized gravity information, interception accuracy and precision were significantly higher for 1 g motion. Moreover, Bayesian regression indicated that interceptive responses were predicted very closely by assuming engagement of the gravity prior 450 ms after motion onset, and that adding a predictor related to on-line processing of optical information improved the model's predictive power only slightly.
Discussion: Thus, engagement of a priori gravity information depended critically on the processing of the first 450 ms of visual motion and exerted a predominant influence on interceptive timing compared with continuously available optical information. Finally, these results may support a parallel processing scheme for the control of interceptive timing.
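For intuition about why these motion laws demand different interceptive timing, here is a minimal sketch (our own illustration, not the authors' model) of the predicted time-to-contact for a vertical drop under 1 g acceleration versus constant velocity:

```python
import math

G = 9.81  # Earth gravity (m/s^2)

def ttc_1g(drop_height, v0=0.0, g=G):
    """Time for a target accelerating at 1g to cover drop_height (m),
    starting at downward speed v0: solves h = v0*t + g*t^2/2."""
    return (-v0 + math.sqrt(v0**2 + 2.0 * g * drop_height)) / g

def ttc_constant_velocity(drop_height, v):
    """Time for a target descending at constant speed v (m/s)."""
    return drop_height / v
```

Under a 1 g prior, a constant-velocity target covering the same distance is predicted to arrive at a different time than it actually does, which is the kind of timing bias paradigms like this one probe.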
Affiliation(s)
- Sergio Delle Monache
- Laboratory of Visuomotor Control and Gravitational Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Department of Systems Medicine and Centre for Space BioMedicine, University of Rome Tor Vergata, Rome, Italy
- Gianluca Paolocci
- Department of Systems Medicine and Centre for Space BioMedicine, University of Rome Tor Vergata, Rome, Italy
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Francesco Scalici
- Department of Systems Medicine and Centre for Space BioMedicine, University of Rome Tor Vergata, Rome, Italy
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Allegra Conti
- Department of Biomedicine and Prevention, University of Rome Tor Vergata, Rome, Italy
- Francesco Lacquaniti
- Department of Systems Medicine and Centre for Space BioMedicine, University of Rome Tor Vergata, Rome, Italy
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Iole Indovina
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Brain Mapping Lab, Department of Biomedical and Dental Sciences and Morphofunctional Imaging, University of Messina, Messina, Italy
- Gianfranco Bosco
- Department of Systems Medicine and Centre for Space BioMedicine, University of Rome Tor Vergata, Rome, Italy
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
2
Holfelder B, Schott N. Object Control Skill Performance Across the Lifespan: A Cross-Sectional Study. Research Quarterly for Exercise and Sport 2022; 93:825-834. PMID: 34781831; DOI: 10.1080/02701367.2021.1924351.
Abstract
Purpose: Studies on object control skills (OCS) have described changes in movement patterns over time, but mostly in children and adolescents, young adults, or older adults. Most of these studies focused on only one skill and usually only on process- or product-oriented outcomes. This study therefore aimed to explore OCS performance in children, younger adults, and older adults. Methods: A total of 120 male participants took part in this study, including 78 primary school children (7.96 ± 1.22 years), 22 young adults (23.5 ± 2.34 years), and 20 older adults (69.5 ± 4.43 years). We assessed the process-oriented performance of throwing, kicking, and catching using the component approach. Throwing and kicking velocity was recorded with a STALKER SOLO 2.0 radar gun; for catching, the number of caught balls was assessed. Results: Young adults had the highest component levels in all OCS; they also produced significantly higher throwing and kicking velocities than children and older adults. The proportion of participants achieving mastery or advanced skill proficiency varied significantly among children (6.4-32.1%), young adults (63.6-100.0%), and older adults (10.0-95.0%). With few exceptions, the results showed mainly moderate, significant correlations between developmental levels and throwing/kicking velocity or the number of successfully caught balls across all age groups. Conclusion: Our data indicate that children in particular rarely demonstrate advanced OCS, and that older adults show a decline in throwing and kicking, but not in catching, compared with the younger age groups.
Affiliation(s)
- Benjamin Holfelder
- Department of Sport Psychology & Human Movement Science, Institute for Sport and Exercise Science, University of Stuttgart
- Nadja Schott
- Department of Sport Psychology & Human Movement Science, Institute for Sport and Exercise Science, University of Stuttgart
3
Hagenfeld L, de Lussanet MHE, Boström KJ, Wagner H. Planning Catching Movements: Advantages of Expertise, Visibility and Self-Throwing. J Mot Behav 2022; 54:548-557. PMID: 35016583; DOI: 10.1080/00222895.2021.2022591.
Abstract
In a ball catching task, the catcher guides their hand to the ball's future trajectory. The hand may start to move even before the exact position is known, and the interceptive movement may be corrected online. Using a recent method for detecting the phases of catching movements, we investigate how juggling experience, self-throwing, and delayed visibility of the ball influence the timing of the hand's trajectory. Specifically, we analyze the time from which the goal position of the movement is known, i.e., the time from which the movement becomes smooth. Seventeen jugglers and twenty controls caught ten balls in each of eight conditions. The results indicate that experts' catching movements acquire the smooth nature of goal-directed movements earlier than novices' catching movements do.
Affiliation(s)
- Lena Hagenfeld
- Department of Movement Science, Institute of Sport and Exercise Sciences, University of Münster, Münster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience (OCC), University of Münster, Germany
- Marc H E de Lussanet
- Department of Movement Science, Institute of Sport and Exercise Sciences, University of Münster, Münster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience (OCC), University of Münster, Germany
- Kim Joris Boström
- Department of Movement Science, Institute of Sport and Exercise Sciences, University of Münster, Münster, Germany
- Heiko Wagner
- Department of Movement Science, Institute of Sport and Exercise Sciences, University of Münster, Münster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience (OCC), University of Münster, Germany
4
López-Moliner J, de la Malla C. Motion-in-depth effects on interceptive timing errors in an immersive environment. Sci Rep 2021; 11:21961. PMID: 34754000; PMCID: PMC8578488; DOI: 10.1038/s41598-021-01397-x.
Abstract
We often need to interact with targets that move along arbitrary trajectories in the 3D scene. In these situations, information about parameters such as speed, time-to-contact, or motion direction is required to solve a broad class of timing tasks (e.g., shooting or interception). There is a large body of literature addressing how we estimate different parameters when objects move both in the fronto-parallel plane and in depth. However, we do not know to what extent the timing of interceptive actions is affected when motion-in-depth (MID) is involved. Unlike previous studies that have looked at the timing of interceptive actions using constant distances and fronto-parallel motion, we here use immersive virtual reality to examine how differences in the above-mentioned variables influence timing errors in a shooting task performed in a 3D environment. Participants had to shoot at targets that moved along different angles of approach with respect to the observer, firing when the targets reached designated shooting locations. We recorded the shooting time, the temporal and spatial errors, and the head's position and orientation in two conditions that differed in the interval between the shot and the interception of the target's path. Results show a consistent change in the temporal error across approach angles: the larger the angle, the earlier the error. Interestingly, we also found different error patterns within a given angle that depended on whether participants tracked the whole target trajectory or only its end-point. These differences had a larger impact when the target moved in depth and are consistent with an underestimation of motion-in-depth in the periphery. We conclude that the strategy participants use to track the target's trajectory interacts with MID and affects timing performance.
Affiliation(s)
- Joan López-Moliner
- Vision and Control of Action (VISCA) Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Catalonia, Spain.
- Cristina de la Malla
- Vision and Control of Action (VISCA) Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Catalonia, Spain
5
Aguado B, López-Moliner J. Gravity and Known Size Calibrate Visual Information to Time Parabolic Trajectories. Front Hum Neurosci 2021; 15:642025. PMID: 34497497; PMCID: PMC8420811; DOI: 10.3389/fnhum.2021.642025.
Abstract
Catching a ball in parabolic flight is a complex task in which the time and area of interception are strongly coupled, making interception possible only for a short period. Although this makes the estimation of time-to-contact (TTC) from visual information in parabolic trajectories very useful, previous attempts to explain our precision in interceptive tasks circumvent the need to estimate TTC to guide action. Obtaining TTC from optical variables alone in parabolic trajectories would imply very complex transformations from 2D retinal images to a 3D layout. Building on previous work, we propose, and show through simulations, that exploiting prior distributions of gravity and known physical size makes these transformations much simpler, enabling predictive capacities from minimal early visual information. Optical information is inherently ambiguous, and it is therefore necessary to explain how these prior distributions generate predictions. This is where prior information comes into play: it can help interpret and calibrate visual information to yield meaningful predictions of the remaining TTC. The objectives of this work are: (1) to describe the primary sources of information available to the observer in parabolic trajectories; (2) to unveil how prior information can be used to disambiguate the sources of visual information within a Bayesian encoding-decoding framework; (3) to show that such predictions might be robust against complex dynamic environments; and (4) to indicate future lines of research for scrutinizing the role of prior knowledge in calibrating visual information and prediction for action control.
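As a toy illustration of the calibration idea (our own sketch under simplifying assumptions, not the authors' derivation): a known physical size converts retinal angle into distance, and the gravity prior then converts the current height and vertical velocity into a prediction of the remaining TTC:

```python
import math

G = 9.81  # Earth-gravity prior (m/s^2)

def distance_from_angle(theta, physical_size):
    """Distance recovered from retinal angle theta (radians), assuming the
    object's physical size (m) is known."""
    return physical_size / (2.0 * math.tan(theta / 2.0))

def remaining_ttc(height, v_up, g=G):
    """Remaining flight time until the ball returns to eye level (height 0),
    given its current height above eye level (m) and upward velocity (m/s).
    Solves 0 = height + v_up*t - g*t^2/2 for the positive root."""
    return (v_up + math.sqrt(v_up**2 + 2.0 * g * height)) / g
```

Without the size and gravity priors, neither distance nor the remaining flight time is recoverable from the 2D retinal variables alone, which is precisely the ambiguity discussed here.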
Affiliation(s)
- Borja Aguado
- Vision and Control of Action (VISCA) Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain
- Joan López-Moliner
- Vision and Control of Action (VISCA) Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain
6
Aguado B, López-Moliner J. Flexible viewing time when estimating time-to-contact in 3D parabolic trajectories. J Vis 2021; 21:9. PMID: 33900365; PMCID: PMC8088230; DOI: 10.1167/jov.21.4.9.
Abstract
Obtaining reliable estimates of the time-to-contact (TTC) in a three-dimensional (3D) parabolic trajectory is still an open issue. A direct analysis of the optic flow cannot make accurate predictions for gravitationally accelerated objects. Alternatively, resorting to prior knowledge of gravity and size can provide accurate estimates of TTC in parabolic head-on trajectories, but its generalization depends on the specific geometry of the trajectory and on particular moments of it. The aim of this work is to explore the preferred viewing windows for estimating TTC and how the available visual information affects these estimations. We designed a task in which participants, wearing a head-mounted display (HMD), had to time the moment a ball in a parabolic path returned to eye level. We used five trajectories for which accurate temporal predictions were available at different points of the flight. Our results show that observers can predict both the trajectory of the ball and the TTC based on the available visual information and previous experience with the task. However, the times at which observers chose to gather visual evidence did not match those at which the visual information afforded accurate TTC. Instead, they looked at the ball during relatively fixed temporal windows that depended on the trajectory but not on the TTC.
Affiliation(s)
- Borja Aguado
- Vision and Control of Action (VISCA) Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Catalonia, Spain
- Joan López-Moliner
- Vision and Control of Action (VISCA) Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Catalonia, Spain
7
Abstract
In a 2-alternative forced-choice protocol, observers judged the duration of ball motions shown on an immersive virtual-reality display as approaching in the sagittal plane along parabolic trajectories compatible with Earth gravity effects. In different trials, the ball shifted along the parabolas with one of three different laws of motion: constant tangential velocity, constant vertical velocity, or gravitational acceleration. Only the latter motion was fully consistent with Newton’s laws in the Earth gravitational field, whereas the motions with constant velocity profiles obeyed the spatio-temporal constraint of parabolic paths dictated by gravity but violated the kinematic constraints. We found that the discrimination of duration was accurate and precise for all types of motions, but the discrimination for the trajectories at constant tangential velocity was slightly but significantly more precise than that for the trajectories at gravitational acceleration or constant vertical velocity. The results are compatible with a heuristic internal representation of gravity effects that can be engaged when viewing projectiles shifting along parabolic paths compatible with Earth gravity, irrespective of the specific kinematics. Opportunistic use of a moving frame attached to the target may favour visual tracking of targets with constant tangential velocity, accounting for the slightly superior duration discrimination.
8
Knelange EB, López-Moliner J. Increased error-correction leads to both higher levels of variability and adaptation. PLoS One 2020; 15:e0227913. PMID: 32017774; PMCID: PMC6999875; DOI: 10.1371/journal.pone.0227913.
Abstract
In order to intercept moving objects, we need to predict the spatiotemporal features of the motion of both the object and our hand. Our errors can lead to updates of these predictions that benefit future interceptions (adaptation). Recent studies claim that task-relevant variability in baseline performance can help adaptation to perturbations, because initial variability helps explore the spatial demands of the task. In this study, we examined whether this relationship also holds for interception (the temporal domain) by looking at the link between the variability of hand-movement speed during baseline trials and adaptation to a temporal perturbation. Seventeen subjects performed an interception task on a graphics tablet with a stylus. A target moved from left to right or vice versa, with varying speed across trials. Participants were instructed to intercept this target with a straight forward movement of their hand. Their movements were represented by a cursor displayed on a screen above the tablet. To prevent online corrections, we blocked the hand from view and occluded part of the cursor's trajectory. After a baseline phase of 80 trials, a temporal delay of 100 ms was introduced to the cursor representing the hand (adaptation phase: 80 trials). This delay initially caused participants to miss the target, but they quickly accounted for these errors by adapting to most of the cursor delay. We found that variability in baseline movement velocity is a good predictor of temporal adaptation (defined as a combination of the rate of change and the asymptotic level of change after a perturbation), with higher baseline variability associated with better adaptation. However, cross-correlation results suggest that the increased variability results from increased error correction rather than exploration.
Affiliation(s)
- Elisabeth B. Knelange
- Department of Cognition, Development and Psychology of Education, Vision and Control of Action (VISCA) Group, Institut de Neurociències, Universitat de Barcelona, Barcelona, Catalonia, Spain
- Joan López-Moliner
- Department of Cognition, Development and Psychology of Education, Vision and Control of Action (VISCA) Group, Institut de Neurociències, Universitat de Barcelona, Barcelona, Catalonia, Spain
9
Aguilar-Lleyda D, Tubau E, López-Moliner J. An object-tracking model that combines position and speed explains spatial and temporal responses in a timing task. J Vis 2019; 18:12. PMID: 30458517; DOI: 10.1167/18.12.12.
Abstract
Many tasks require synchronizing our actions with particular moments along the path of moving targets. However, it is controversial whether we base these actions on spatial or temporal information, and whether using either can enhance our performance. We addressed these questions with a coincidence timing task in which a target varying in speed and motion duration approached a goal. Participants stopped the target and were rewarded according to its proximity to the goal. Results showed larger rewards for responses temporally (rather than spatially) equidistant to the goal across speeds, and this pattern was promoted by longer motion durations. We used a Kalman filter to simulate time- and space-based responses, where modeled speed uncertainty depended on motion duration and positional uncertainty on target speed. The comparison between simulated and observed responses revealed that a single position-tracking mechanism could account for both spatial and temporal patterns, providing a unified computational explanation.
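The position-plus-speed tracking mechanism can be illustrated with a minimal one-dimensional constant-velocity Kalman filter (a generic sketch with hypothetical noise parameters, not the authors' fitted model):

```python
import numpy as np

def track_target(observations, dt=0.01, meas_noise=0.05, accel_noise=1.0):
    """Minimal 1-D constant-velocity Kalman filter: jointly estimates
    position and speed from noisy position samples."""
    F = np.array([[1.0, dt], [0.0, 1.0]])            # state transition
    H = np.array([[1.0, 0.0]])                        # only position is observed
    Q = accel_noise * np.array([[dt**4 / 4, dt**3 / 2],
                                [dt**3 / 2, dt**2]])  # process noise
    R = np.array([[meas_noise**2]])                   # measurement noise
    x = np.array([[observations[0]], [0.0]])          # initial state
    P = np.eye(2)                                     # initial uncertainty
    for z in observations[1:]:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new position sample
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return float(x[0, 0]), float(x[1, 0])  # position and speed estimates
```

Making the assumed measurement noise grow with target speed, and the speed uncertainty shrink with motion duration, would reproduce the uncertainty structure the abstract describes.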
Affiliation(s)
- David Aguilar-Lleyda
- Vision and Control of Action (VISCA) Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Catalonia, Spain
- Present address: Centre d'Économie de la Sorbonne (CNRS & Université Paris), Paris, France
- Elisabet Tubau
- VISCA Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Catalonia, Spain
- Joan López-Moliner
- VISCA Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Catalonia, Spain
10
Rokers B, Fulvio JM, Pillow JW, Cooper EA. Systematic misperceptions of 3-D motion explained by Bayesian inference. J Vis 2018; 18:23. PMID: 29677339; PMCID: PMC6691918; DOI: 10.1167/18.3.23.
Abstract
People make surprising but reliable perceptual errors. Here, we provide a unified explanation for systematic errors in the perception of three-dimensional (3-D) motion. To do so, we characterized the binocular retinal motion signals produced by objects moving through arbitrary locations in 3-D. Next, we developed a Bayesian model, treating 3-D motion perception as optimal inference given sensory noise in the measurement of retinal motion. The model predicts a set of systematic perceptual errors, which depend on stimulus distance, contrast, and eccentricity. We then used a virtual-reality headset as well as a standard 3-D desktop stereoscopic display to test these predictions in a series of perceptual experiments. As predicted, we found evidence that errors in 3-D motion perception depend on the contrast, viewing distance, and eccentricity of a stimulus. These errors include a lateral bias in perceived motion direction and a surprising tendency to misreport approaching motion as receding and vice versa. In sum, we present a Bayesian model that provides a parsimonious account for a range of systematic misperceptions of motion in naturalistic environments.
Affiliation(s)
- Bas Rokers
- Department of Psychology, University of Wisconsin, Madison, WI, USA
- Emily A Cooper
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
11
Rokers B, Fulvio JM, Pillow JW, Cooper EA. Systematic misperceptions of 3-D motion explained by Bayesian inference. J Vis 2018. DOI: 10.1167/jov.18.3.23.
Affiliation(s)
- Bas Rokers
- Department of Psychology, University of Wisconsin, Madison, WI, USA
- Emily A. Cooper
- Department of Psychology, University of Wisconsin, Madison, WI, USA
12
Jörges B, López-Moliner J. Gravity as a Strong Prior: Implications for Perception and Action. Front Hum Neurosci 2017; 11:203. PMID: 28503140; PMCID: PMC5408029; DOI: 10.3389/fnhum.2017.00203.
Abstract
In the future, humans are likely to be exposed to environments with altered gravity conditions, be it only visually (virtual and augmented reality) or visually and bodily (space travel). Because visually and bodily perceived gravity, as well as an internalized representation of Earth gravity, are involved in a range of tasks, such as catching, grasping, body orientation estimation, and spatial inference, humans will need to adapt to these new gravity conditions. Performance under Earth-gravity-discrepant conditions has been shown to be relatively poor, and the few studies conducted on gravity adaptation are rather discouraging. Especially in VR on Earth, conflicts between bodily and visual gravity cues seem to make full adaptation to visually perceived Earth-discrepant gravities nearly impossible, and even in space, when visual and bodily cues are congruent, adaptation is extremely slow. We invoke a Bayesian framework for gravity-related perceptual processes, in which Earth gravity holds the status of a so-called "strong prior". Like other strong priors, the gravity prior has developed through years and years of experience in an Earth-gravity environment. For this reason, the reliability of this representation is extremely high, and it overrules any sensory information to the contrary. While other factors, such as the multisensory nature of gravity perception, also need to be taken into account, we present the strong-prior account as a unifying explanation for empirical results in gravity perception and in adaptation to Earth-discrepant gravities.
Affiliation(s)
- Björn Jörges
- Department of Cognition, Development and Psychology of Education, Faculty of Psychology, Universitat de Barcelona, Catalonia, Spain
- Institut de Neurociències, Universitat de Barcelona, Catalonia, Spain
- Joan López-Moliner
- Department of Cognition, Development and Psychology of Education, Faculty of Psychology, Universitat de Barcelona, Catalonia, Spain
- Institut de Neurociències, Universitat de Barcelona, Catalonia, Spain
13
Hitting moving targets with a continuously changing temporal window. Exp Brain Res 2015; 233:2507-15. DOI: 10.1007/s00221-015-4321-x.
14
Zhao H, Warren WH. On-line and model-based approaches to the visual control of action. Vision Res 2014; 110:190-202. PMID: 25454700; DOI: 10.1016/j.visres.2014.10.008.
Abstract
Two general approaches to the visual control of action have emerged in the last few decades, known as the on-line and model-based approaches. The key difference between them is whether action is controlled by current visual information or on the basis of an internal world model. In this paper, we evaluate three hypotheses: strong on-line control, strong model-based control, and a hybrid solution that combines on-line control with weak off-line strategies. We review experimental research on the control of locomotion and manual actions, which indicates that (a) an internal world model is neither sufficient nor necessary to control action at normal levels of performance; (b) current visual information is necessary and sufficient to control action at normal levels; and (c) under certain conditions (e.g., occlusion), action is controlled by less accurate, simple strategies such as heuristics, visual-motor mappings, or spatial memory. We conclude that the strong model-based hypothesis is not sustainable. Action is normally controlled on-line when current information is available, consistent with the strong on-line control hypothesis. In exceptional circumstances, action is controlled by weak, context-specific, off-line strategies. This hybrid solution is comprehensive, parsimonious, and able to account for a variety of tasks under a range of visual conditions.
Affiliation(s)
- Huaiyong Zhao
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, United States
- William H Warren
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, United States
15
The time course of estimating time-to-contact: switching between sources of information. Vision Res 2013; 92:53-8. DOI: 10.1016/j.visres.2013.09.007.
Abstract
The different sources of information that can be used to estimate time-to-contact may have different degrees of reliability over time. For example, after a given presentation or display time, an absolute change of angular size can be more reliable than the corresponding estimate of the rate of angular expansion (i.e., motion information). One could then expect systematic biases in observers' responses for different times of stimulus exposure. In one experiment, observers judged whether approaching objects arrived at the point of observation before or after a reference beep (1.2 s) under monocular, and binocular plus monocular, vision. Five display times from 0.1 to 0.9 s were used. Unlike monocular viewing, where accuracy increased monotonically with display time, an interesting non-linearity occurred for objects of small size when binocular information was available: accuracy reached maximum values for small objects with only 0.3 s of vision with stereopsis, but dropped significantly after 0.4 s of exposure and then increased again linearly with time. This is consistent with subjects switching from binocular information to monocular motion information once the latter became more reliable. We also explored whether monocular cues were combined differently over time by fitting a model that relates visual angle to its rate of expansion. Results show that subjects relied more on angular motion information (i.e., the rate of expansion) with increasing presentation time, but that interrupting this motion-integration process led to a loss of accuracy in time-to-contact judgments.
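The monocular motion cue referred to here is commonly formalized as tau, the ratio of angular size to its rate of expansion, which approximates TTC to first order for constant approach speed. A small illustrative sketch (our own, not the authors' fitted model):

```python
import math

def angular_size(physical_size, distance):
    """Visual angle (radians) subtended by an object of a given size (m)
    at a given distance (m)."""
    return 2.0 * math.atan(physical_size / (2.0 * distance))

def tau_from_optics(theta, theta_dot):
    """First-order time-to-contact estimate from angular size and its rate
    of expansion; exact for constant approach speed, biased otherwise."""
    return theta / theta_dot
```

Early in a presentation, theta_dot is hard to measure reliably, which is one reason observers may initially weight binocular information before switching to this ratio.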
|
16
|
Hardiess G, Hansmann-Roth S, Mallot HA. Gaze movements and spatial working memory in collision avoidance: a traffic intersection task. Front Behav Neurosci 2013; 7:62. [PMID: 23760667 PMCID: PMC3674308 DOI: 10.3389/fnbeh.2013.00062] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2013] [Accepted: 05/22/2013] [Indexed: 11/15/2022] Open
Abstract
Street crossing under traffic is an everyday activity that involves collision detection as well as avoidance of objects in the path of motion. Such tasks demand the extraction and representation of spatio-temporal information about relevant obstacles in an optimized format. Relevant task information is extracted visually through gaze movements and represented in spatial working memory. In a virtual-reality traffic intersection task, subjects were confronted with a two-lane intersection where cars appeared at different frequencies, corresponding to high and low traffic densities. Under free observation and exploration of the scenery (using unrestricted eye and head movements), the subjects' overall task was to predict the potential-of-collision (POC) of the cars or to select an adequate driving speed in order to cross the intersection without collision (i.e., to find the free space for crossing). In a series of experiments, gaze-movement parameters, task performance, and the representation of car positions within working memory at distinct time points were assessed in normal subjects as well as in neurological patients suffering from homonymous hemianopia. In the following, we review the findings of these experiments together with other studies and provide a new perspective on the role of gaze behavior and spatial memory in collision detection and avoidance, focusing on the following questions: (1) Which sensory variables can be identified that support adequate collision detection? (2) How do gaze movements and working memory contribute to collision avoidance when multiple moving objects are present, and (3) how do they correlate with task performance? (4) How do patients with homonymous visual field defects (HVFDs) use gaze movements and working memory to compensate for visual field loss?
In conclusion, we extend the theory of collision detection and avoidance to the case of multiple moving objects and provide a new perspective on the combined operation of external (bottom-up) and internal (top-down) cues in a traffic intersection task.
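The speed-selection side of the crossing task can be illustrated with a toy time-to-arrival computation. This is a hypothetical sketch, not the authors' model; the function names and the safety margin are my own assumptions:

```python
def time_to_arrival(distance_m, speed_mps):
    """Seconds until a car reaches the intersection, assuming constant speed."""
    return distance_m / speed_mps

def safe_crossing_speed(cars, crossing_width_m, margin_s=0.5):
    """Slowest speed that clears an intersection of width `crossing_width_m`
    before the earliest car arrives (minus a safety margin), or None if
    there is no usable gap. `cars` is a list of (distance_m, speed_mps)."""
    earliest = min(time_to_arrival(d, v) for d, v in cars)
    usable_gap = earliest - margin_s
    if usable_gap <= 0:
        return None                  # no free space: wait for the next gap
    return crossing_width_m / usable_gap
```

Even this toy version makes clear why the task loads spatial working memory: the minimum runs over every remembered car, not just the currently fixated one.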
Affiliation(s)
- Gregor Hardiess
- Cognitive Neuroscience, Department of Biology, Institute of Neurobiology, University of Tübingen, Tübingen, Germany
|
17
|
DeLucia PR. Effects of Size on Collision Perception and Implications for Perceptual Theory and Transportation Safety. CURRENT DIRECTIONS IN PSYCHOLOGICAL SCIENCE 2013. [DOI: 10.1177/0963721412471679] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
People avoid collisions when they walk or drive, and they create collisions when they hit balls or tackle opponents. To do so, people rely on the perception of depth (the perception of objects’ locations) and of time-to-collision (the perception of when a collision will occur), which are supported by different information sources. Depth cues, such as relative size, provide heuristics for relative depth, whereas optical invariants, such as tau, provide reliable time-to-collision information. One would expect people to rely on invariants rather than depth cues, but the size-arrival effect shows the contrary: people reported that a large, far approaching object would hit them sooner than a small, near object that in fact would have hit first. This effect of size on collision perception violates theories of time-to-collision perception based solely on the invariant tau and suggests that perception is based on multiple information sources, including heuristics. The size-arrival effect can potentially lead drivers to misjudge when a vehicle will arrive at an intersection and is considered a contributing factor in motorcycle accidents. In this article, I review research on the size-arrival effect and its theoretical and practical implications.
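The size-arrival effect is easy to demonstrate numerically. In this hypothetical sketch (objects and speeds are my own illustration), the size-based heuristic picks the large far object as arriving first, while the true arrival order, which tau would specify correctly under constant speed, favors the small near one:

```python
import math

def visual_angle(size_m, dist_m):
    """Optical angle (rad) subtended by an object of width `size_m`."""
    return 2.0 * math.atan(size_m / (2.0 * dist_m))

def true_ttc(obj):
    """Actual arrival time under constant approach speed."""
    return obj["dist"] / obj["speed"]

# Both objects approach the observer at the same constant speed.
large_far  = {"size": 2.0, "dist": 20.0, "speed": 10.0}   # arrives in 2.0 s
small_near = {"size": 0.2, "dist": 10.0, "speed": 10.0}   # arrives in 1.0 s

# The small near object arrives first ...
arrives_first = min((large_far, small_near), key=true_ttc)
# ... but the large far object subtends the bigger visual angle, so a
# size-based heuristic ("bigger image = hits sooner") picks the wrong one.
looks_closer = max((large_far, small_near),
                   key=lambda o: visual_angle(o["size"], o["dist"]))
```

The mismatch between `arrives_first` and `looks_closer` is the effect in miniature: angular size is a depth heuristic, not a time-to-collision invariant.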
|
18
|
Hosking SG, Davey CE, Kaiser MK. Visual cues for manual control of headway. Front Behav Neurosci 2013; 7:45. [PMID: 23750130 PMCID: PMC3659366 DOI: 10.3389/fnbeh.2013.00045] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2012] [Accepted: 04/29/2013] [Indexed: 11/13/2022] Open
Abstract
The ability to maintain appropriate gaps to objects in one's environment is important when navigating through a three-dimensional world. Previous research has shown that the visual angle subtended by a lead or approaching object, and its rate of change, are important variables for timing interceptions, avoiding collisions, continuously regulating braking, and manually controlling headway. However, investigations of headway maintenance have required participants to maintain a fixed-distance headway and have not examined how information about own speed is taken into account. In the following experiment, we asked participants to use a joystick to follow computer-simulated lead objects. The results showed that ground texture, following speed, and the size of the lead object had significant effects on both mean following distance and following-distance variance. Furthermore, models of the participants' joystick responses provided better fits when it was assumed that the desired visual extent of the lead object varied over time. Taken together, the results indicate that while information about own speed is used by controllers to set the desired headway to a lead object, the continuous regulation of headway is influenced primarily by the visual angle of the lead object and its rate of change. The reliance on visual angle, its rate of change, and/or own-speed information also varied depending on the control dynamics of the system. Such findings are consistent with an optimal control criterion that weights the different sources of information differentially depending on the plant dynamics. As in other judgements of motion in depth, the information used for controlling headway to other objects in the environment varies with the constraints of the task and with different control strategies.
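Continuous headway regulation from the visual angle and its rate of change can be sketched with a toy proportional-derivative law. This is not the fitted model from the study; the gains, sizes, and speeds below are made-up illustration values:

```python
import math

def visual_angle(size_m, dist_m):
    """Optical angle (rad) subtended by a lead object of width `size_m`."""
    return 2.0 * math.atan(size_m / (2.0 * dist_m))

def headway_control(theta, theta_dot, theta_goal, k_p=50.0, k_d=400.0):
    """Toy PD law on the lead car's visual angle: accelerate when the angle
    is smaller than desired (too far behind), brake when it is larger or
    expanding (closing in). Gains are arbitrary illustration values."""
    return k_p * (theta_goal - theta) - k_d * theta_dot

# Follow a lead car driving at a constant 20 m/s, starting 40 m back at
# 25 m/s, with a desired headway of 30 m expressed as a desired visual angle.
dt, lead_speed, car_width = 0.05, 20.0, 2.0
gap, speed = 40.0, 25.0
theta_goal = visual_angle(car_width, 30.0)
theta_prev = visual_angle(car_width, gap)
for _ in range(2000):                    # 100 s of simulated following
    gap += (lead_speed - speed) * dt
    theta = visual_angle(car_width, gap)
    theta_dot = (theta - theta_prev) / dt
    theta_prev = theta
    speed += headway_control(theta, theta_dot, theta_goal) * dt
```

The controller never sees the metric gap, only the optical angle and its expansion, yet the follower settles near the desired 30 m headway at the lead car's speed.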
Affiliation(s)
- Simon G. Hosking
- Air Operations Division, Defence Science and Technology Organisation, Fishermans Bend, VIC, Australia
- Catherine E. Davey
- Air Operations Division, Defence Science and Technology Organisation, Fishermans Bend, VIC, Australia
| | - Mary K. Kaiser
- Human Systems Integration Division, NASA Ames Research Center, Moffett Field, CA, USA
|
19
|
Gómez J, López-Moliner J. Synergies between optical and physical variables in intercepting parabolic targets. Front Behav Neurosci 2013; 7:46. [PMID: 23720614 PMCID: PMC3655327 DOI: 10.3389/fnbeh.2013.00046] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2012] [Accepted: 04/29/2013] [Indexed: 11/13/2022] Open
Abstract
Interception requires precise estimation of time-to-contact (TTC) information. A long-standing view posits that all the information relevant for extracting TTC is available in the angular variables that result from the projection of distal objects onto the retina. The different timing models rooted in this tradition have consequently relied on combining visual angle and its rate of expansion in different ways, with tau the best-known solution for TTC. Generalizing these models to the timing of parabolic trajectories is not straightforward. For example, the different combinations rely on isotropic expansion and usually assume first-order information only, neglecting acceleration. As a consequence, no optical formulation put forward so far specifies the TTC of parabolic targets with sufficient accuracy. Only recently have context-dependent physical variables been shown to play an important role in TTC estimation: known physical size and gravity can adequately explain observed data for linear and free-falling trajectories, respectively. Yet a full timing model for specifying parabolic TTC has remained elusive. Here we derive two formulations that specify TTC for parabolic ball trajectories. The first extends previous models, in which known size is combined with thresholding the visual angle or its rate of expansion, to the case of fly balls. To use this model efficiently, observers need to recover the 3D radial velocity component of the trajectory, which conveys the isotropic expansion. The second uses knowledge of size and gravity combined with the ball's visual angle and elevation angle. Taking into account the noise due to sensory measurements, we simulate the expected performance of these models in terms of accuracy and precision. While the model that combines expansion information with size knowledge is more efficient late in the trajectory, the second is shown to be efficient throughout the entire flight.
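The cost of neglecting acceleration is easy to quantify for the radial component of an approach. The following sketch is a purely radial approximation of my own, not the paper's full parabolic formulations (which also use elevation angle and known size); it compares a first-order, tau-style estimate with the true TTC of an accelerating target:

```python
import math

def true_ttc(d0, v0, a):
    """Exact arrival time for an object at radial distance `d0`, closing at
    speed `v0` with constant radial acceleration `a` (all toward the eye),
    from solving 0.5*a*t**2 + v0*t - d0 = 0 for the positive root."""
    if a == 0.0:
        return d0 / v0
    return (-v0 + math.sqrt(v0 * v0 + 2.0 * a * d0)) / a

def tau_ttc(d0, v0):
    """First-order (tau-style) estimate: distance over speed, acceleration ignored."""
    return d0 / v0

# A target 10 m away closing at 5 m/s while accelerating at ~1 g:
accelerated = true_ttc(10.0, 5.0, 9.81)   # arrives in about 1.0 s
first_order = tau_ttc(10.0, 5.0)          # tau says 2.0 s: a large overestimate
```

For an accelerating approach, the first-order estimate always arrives late, which is why purely first-order optical models struggle with parabolic flight.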
Affiliation(s)
- José Gómez
- Departament de Matemàtica Aplicada IV, Universitat Politècnica de Catalunya Barcelona, Spain
|
20
|
Diaz G, Cooper J, Rothkopf C, Hayhoe M. Saccades to future ball location reveal memory-based prediction in a virtual-reality interception task. J Vis 2013; 13:13.1.20. [PMID: 23325347 DOI: 10.1167/13.1.20] [Citation(s) in RCA: 96] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Despite general agreement that prediction is a central aspect of perception, there is relatively little evidence concerning the basis on which visual predictions are made. Although both saccadic and pursuit eye movements reveal knowledge of the future position of a moving visual target, in many of these studies targets move along simple trajectories in a fronto-parallel plane. Here, using a naturalistic, racquet-based interception task in a virtual environment, we demonstrate that subjects make accurate predictions of visual target motion even when targets follow trajectories determined by the complex dynamics of physical interactions and the head and body are unrestrained. Furthermore, we found that, following a change in ball elasticity, subjects were able to accurately adjust their pre-bounce predictions of the ball's post-bounce trajectory. This suggests that prediction is guided by experience-based models of how information in the visual image will change over time.
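An experience-based bounce prediction of the kind suggested here could, at its simplest, carry a single learned parameter: the coefficient of restitution. The sketch below is a hypothetical physics illustration of my own, not the authors' model:

```python
def post_bounce_speed(impact_speed, elasticity):
    """Upward speed just after the bounce, scaled by the learned
    coefficient of restitution (`elasticity` in (0, 1])."""
    return elasticity * impact_speed

def predicted_apex_height(impact_speed, elasticity, g=9.81):
    """Post-bounce apex height an internal model would predict once it has
    learned the ball's elasticity, from v**2 / (2*g)."""
    v_up = post_bounce_speed(impact_speed, elasticity)
    return v_up ** 2 / (2.0 * g)

# After an elasticity change, the same impact speed predicts a very
# different post-bounce trajectory, so pre-bounce saccades must adapt:
lively = predicted_apex_height(10.0, 0.8)   # bouncier ball, higher apex
dead   = predicted_apex_height(10.0, 0.5)   # deader ball, lower apex
```

Updating one parameter is enough to shift the whole predicted post-bounce trajectory, consistent with subjects re-aiming their pre-bounce saccades after the elasticity change.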
Affiliation(s)
- Gabriel Diaz
- Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA.
|