1. Yokosaka T, Ujitoko Y, Kawabe T. Computational account for the naturalness perception of others' jumping motion based on a vertical projectile motion model. Proc Biol Sci 2024; 291:20241490. PMID: 39288810; PMCID: PMC11407856; DOI: 10.1098/rspb.2024.1490.
Abstract
The visual naturalness of a rendered character's motion is an important factor in computer graphics work, and the rendering of jumping motions is no exception to this. However, the computational mechanism that underlies the observer's judgement of the naturalness of a jumping motion has not yet been fully elucidated. We hypothesized that observers would perceive a jumping motion as more natural when the jump trajectory was consistent with the trajectory of a vertical projectile motion based on Earth's gravity. We asked human participants to evaluate the naturalness of point-light jumping motions whose height and duration were modulated. The results showed that the observers' naturalness rating varied with the modulation ratios of the jump height and duration. Interestingly, the ratings were high even when the height and duration differed from the actual jump. To explain this tendency, we constructed computational models that predicted the theoretical trajectory of a jump based on the projectile motion formula and calculated the errors between the theoretical and observed trajectories. The pattern of the errors correlated closely with the participants' ratings. Our results suggest that observers judge the naturalness of observed jumping motion based on the error between observed and predicted jump trajectories.
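The core of the model described above, predicting the gravity-consistent trajectory from the projectile-motion formula and scoring an observed jump by its deviation from that prediction, can be sketched as follows (a minimal illustration with hypothetical function names, not the authors' code):

```python
import numpy as np

G = 9.81  # Earth's gravitational acceleration (m/s^2)

def theoretical_jump(duration, n=50):
    """Height profile of a vertical projectile whose flight lasts
    `duration` seconds: takeoff speed v0 = G * duration / 2 makes the
    jumper land exactly at t = duration."""
    t = np.linspace(0.0, duration, n)
    v0 = G * duration / 2.0
    return v0 * t - 0.5 * G * t ** 2

def trajectory_error(observed_heights, duration):
    """RMS error between an observed jump trajectory (sampled uniformly
    over the flight) and the gravity-consistent prediction; a lower
    error would predict a higher naturalness rating."""
    predicted = theoretical_jump(duration, n=len(observed_heights))
    return float(np.sqrt(np.mean((observed_heights - predicted) ** 2)))
```

A jump whose height is scaled away from the physically consistent value yields a nonzero error, and the error grows with the size of the modulation.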
2. Jörges B, Harris LR. The impact of visually simulated self-motion on predicting object motion. PLoS One 2024; 19:e0295110. PMID: 38483949; PMCID: PMC10939277; DOI: 10.1371/journal.pone.0295110.
Abstract
To interact successfully with moving objects in our environment, we need to be able to predict their behavior. Predicting the position of a moving object requires an estimate of its velocity. When flow parsing during self-motion is incomplete, that is, when some of the retinal motion created by self-motion is incorrectly attributed to object motion, object velocity estimates become biased. Further, the process of flow parsing should add noise and make object velocity judgements more variable during self-motion. Biases and lowered precision in velocity estimation should then translate into biases and lowered precision in motion extrapolation. We investigated this relationship between self-motion, velocity estimation and motion extrapolation with two tasks performed in a realistic virtual reality (VR) environment. In the first, participants were shown a ball moving laterally that disappeared after a certain time; they then indicated by button press when they thought the ball would have hit a target rectangle positioned in the environment. While the ball was visible, participants sometimes experienced simultaneous visual lateral self-motion in either the same or the opposite direction of the ball. The second task was a two-interval forced-choice task in which participants judged which of two motions was faster: in one interval they saw the same ball as in the first task, while in the other they saw a ball cloud whose speed was controlled by a PEST staircase. While observing the single ball, they were again moved visually either in the same or the opposite direction as the ball, or they remained static. We found the expected biases in estimated time-to-contact, while for the speed estimation task this was only the case when the ball and observer were moving in opposite directions. Our hypotheses regarding precision were largely unsupported by the data.
Overall, we draw several conclusions from this experiment. First, incomplete flow parsing can affect motion prediction. Further, the results suggest that time-to-contact estimation and speed judgements are determined by partially different mechanisms. Finally, and perhaps most strikingly, there appear to be compensatory mechanisms at play that allow for much higher-than-expected precision when observers experience self-motion, even when self-motion is simulated only visually.
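The flow-parsing account tested here can be illustrated with a toy model (hypothetical names and parameterization, not the study's analysis code): a flow-parsing gain below 1 leaves part of the self-motion-induced retinal motion attributed to the object, biasing both the velocity estimate and the time-to-contact extrapolated from it.

```python
def perceived_velocity(v_object, v_self_retinal, gain):
    """Incomplete flow parsing: only a fraction `gain` of the retinal
    motion induced by self-motion is subtracted out; the remainder is
    misattributed to the object."""
    return v_object + (1.0 - gain) * v_self_retinal

def extrapolated_ttc(distance, v_object, v_self_retinal, gain):
    """Time until the occluded ball reaches the target, extrapolated
    from the biased velocity estimate."""
    return distance / perceived_velocity(v_object, v_self_retinal, gain)
```

With complete parsing (gain = 1) the estimate is veridical; with, say, gain = 0.6 and self-motion whose retinal effect opposes the ball's motion, the ball appears slower and the extrapolated time-to-contact is overestimated.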
Affiliation(s)
- Björn Jörges
- Center for Vision Research, York University, Toronto, Ontario, Canada
3. Phan MH, Jörges B, Harris LR, Kingdom FAA. A visual bias for falling objects. Perception 2024; 53:197-207. PMID: 38304970; PMCID: PMC10858620; DOI: 10.1177/03010066241228681.
Abstract
Aristotle believed that objects fell at a constant velocity. However, Galileo Galilei showed that when an object falls, gravity causes it to accelerate. Regardless, Aristotle's claim raises the possibility that people's visual perception of falling motion might be biased away from acceleration towards constant velocity. We tested this idea by requiring participants to judge whether a ball moving in a simulated naturalistic setting appeared to accelerate or decelerate as a function of its motion direction and the amount of acceleration/deceleration. We found that the point of subjective constant velocity (PSCV) differed between up and down but not between left and right motion directions. The PSCV difference between up and down indicated that more acceleration was needed for a downward-falling object to appear at constant velocity than for an upward "falling" object. We found no significant differences in sensitivity to acceleration for the different motion directions. Generalized linear mixed modeling determined that participants relied predominantly on acceleration when making these judgments. Our results support the idea that Aristotle's belief may in part be due to a bias that reduces the perceived magnitude of acceleration for falling objects, a bias not revealed in previous studies of the perception of visual motion.
4. Delle Monache S, Paolocci G, Scalici F, Conti A, Lacquaniti F, Indovina I, Bosco G. Interception of vertically approaching objects: temporal recruitment of the internal model of gravity and contribution of optical information. Front Physiol 2023; 14:1266332. PMID: 38046950; PMCID: PMC10690631; DOI: 10.3389/fphys.2023.1266332.
Abstract
Introduction: Recent views posit that precise control of interceptive timing can be achieved by combining on-line processing of visual information with predictions based on prior experience. Indeed, for the interception of free-falling objects under gravity's effects, experimental evidence shows that time-to-contact predictions can be derived from an internal gravity representation in the vestibular cortex. However, whether the internal gravity model is fully engaged at the outset of target motion or is reinforced by visual motion processing at later stages of motion is not yet clear. Moreover, there is no conclusive evidence about the relative contributions of internalized gravity and optical information in determining time-to-contact estimates. Methods: We sought to gain insight into this issue by asking 32 participants to intercept free-falling objects approaching directly from above in virtual reality. Object motion lasted between 800 and 1100 ms and could be either congruent with gravity (1 g accelerated motion) or not (constant velocity or -1 g decelerated motion). We analyzed the accuracy and precision of the interceptive responses and fitted them with Bayesian regression models, which included predictors related to the recruitment of a priori gravity information at different times during target motion, as well as predictors based on available optical information. Results: Consistent with the use of internalized gravity information, interception accuracy and precision were significantly higher with 1 g motion. Moreover, Bayesian regression indicated that interceptive responses were predicted very closely by assuming engagement of the gravity prior 450 ms after motion onset, and that adding a predictor related to on-line processing of optical information improved the model's predictive power only slightly.
Discussion: Thus, engagement of a priori gravity information depended critically on the processing of the first 450 ms of visual motion information and exerted a predominant influence on interceptive timing compared with the continuously available optical information. Finally, these results may support a parallel processing scheme for the control of interceptive timing.
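The contrast this study exploits, between first-order extrapolation from optical information and a prediction that assumes the target accelerates at 1 g, can be sketched numerically (a simplified illustration under assumed function names, not the authors' Bayesian model):

```python
import math

G = 9.81  # internalized gravity prior (m/s^2)

def ttc_constant_velocity(distance, speed):
    """First-order time-to-contact from optical information alone."""
    return distance / speed

def ttc_gravity_prior(distance, speed):
    """Time-to-contact assuming the target keeps accelerating at 1 g:
    solves distance = speed * t + 0.5 * G * t**2 for the positive t."""
    return (-speed + math.sqrt(speed ** 2 + 2.0 * G * distance)) / G
```

For a target that truly falls at 1 g, the gravity-prior estimate matches the physical contact time, while the first-order estimate arrives too late, which is consistent with the higher accuracy observed for 1 g motion.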
Affiliation(s)
- Sergio Delle Monache
- Laboratory of Visuomotor Control and Gravitational Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Department of Systems Medicine and Centre for Space BioMedicine, University of Rome Tor Vergata, Rome, Italy
- Gianluca Paolocci
- Department of Systems Medicine and Centre for Space BioMedicine, University of Rome Tor Vergata, Rome, Italy
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Francesco Scalici
- Department of Systems Medicine and Centre for Space BioMedicine, University of Rome Tor Vergata, Rome, Italy
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Allegra Conti
- Department of Biomedicine and Prevention, University of Rome Tor Vergata, Rome, Italy
- Francesco Lacquaniti
- Department of Systems Medicine and Centre for Space BioMedicine, University of Rome Tor Vergata, Rome, Italy
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Iole Indovina
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Brain Mapping Lab, Department of Biomedical and Dental Sciences and Morphofunctional Imaging, University of Messina, Messina, Italy
- Gianfranco Bosco
- Department of Systems Medicine and Centre for Space BioMedicine, University of Rome Tor Vergata, Rome, Italy
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
5. Torricelli F, Tomassini A, Pezzulo G, Pozzo T, Fadiga L, D'Ausilio A. Motor invariants in action execution and perception. Phys Life Rev 2023; 44:13-47. PMID: 36462345; DOI: 10.1016/j.plrev.2022.11.003.
Abstract
The nervous system is sensitive to statistical regularities of the external world and forms internal models of these regularities to predict environmental dynamics. Given the inherently social nature of human behavior, being capable of building reliable predictive models of others' actions may be essential for successful interaction. While social prediction might seem to be a daunting task, the study of human motor control has accumulated ample evidence that our movements follow a series of kinematic invariants, which can be used by observers to reduce their uncertainty during social exchanges. Here, we provide an overview of the most salient regularities that shape biological motion, examine the role of these invariants in recognizing others' actions, and speculate that anchoring socially-relevant perceptual decisions to such kinematic invariants provides a key computational advantage for inferring conspecifics' goals and intentions.
Affiliation(s)
- Francesco Torricelli
- Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
- Alice Tomassini
- Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Via San Martino della Battaglia 44, 00185 Rome, Italy
- Thierry Pozzo
- Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; INSERM UMR1093-CAPS, UFR des Sciences du Sport, Université Bourgogne Franche-Comté, F-21000, Dijon, France
- Luciano Fadiga
- Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
- Alessandro D'Ausilio
- Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
6. Aguado B, López-Moliner J. Gravity and known size calibrate visual information to time parabolic trajectories. Front Hum Neurosci 2021; 15:642025. PMID: 34497497; PMCID: PMC8420811; DOI: 10.3389/fnhum.2021.642025.
Abstract
Catching a ball in parabolic flight is a complex task in which the time and area of interception are strongly coupled, making interception possible only for a short period. Although this makes estimating time-to-contact (TTC) from visual information very useful in parabolic trajectories, previous attempts to explain our precision in interceptive tasks circumvent the need to estimate TTC to guide action. Obtaining TTC from optical variables alone in parabolic trajectories would require very complex transformations from 2D retinal images to a 3D layout. Building on previous work, we propose, and show through simulations, that exploiting prior distributions of gravity and known physical size makes these transformations much simpler, enabling predictive capacities from minimal early visual information. Optical information is inherently ambiguous, so it is necessary to explain how meaningful predictions can be generated from it. This is where prior information comes into play: it can help interpret and calibrate visual information to yield meaningful predictions of the remaining TTC. The objectives of this work are: (1) to describe the primary sources of information available to the observer in parabolic trajectories; (2) to unveil how prior information can be used to disambiguate these sources of visual information within a Bayesian encoding-decoding framework; (3) to show that such predictions might be robust against complex dynamic environments; and (4) to indicate future lines of research into the role of prior knowledge in calibrating visual information and prediction for action control.
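The two priors proposed above can be illustrated in a few lines (a schematic sketch with hypothetical names and a small-angle approximation, not the authors' simulations): known physical size turns ambiguous angular size into distance, and the gravity prior turns the ball's current vertical state into a prediction of the remaining time-to-contact.

```python
import math

G = 9.81  # gravity prior (m/s^2)

def distance_from_known_size(physical_size, angular_size):
    """Known physical size disambiguates viewing distance from the
    ball's angular size (radians, small-angle approximation)."""
    return physical_size / angular_size

def remaining_ttc(height, vertical_velocity):
    """Remaining flight time of a projectile at `height` above the
    catch point, moving at `vertical_velocity` (positive = upward),
    assuming acceleration at the gravity prior until height 0:
    solves height + v * t - 0.5 * G * t**2 = 0 for the positive root."""
    return (vertical_velocity
            + math.sqrt(vertical_velocity ** 2 + 2.0 * G * height)) / G
```

At the apex (zero vertical velocity) this reduces to the familiar free-fall time sqrt(2 * height / G).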
Affiliation(s)
- Borja Aguado
- Vision and Control of Action (VISCA) Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain
- Joan López-Moliner
- Vision and Control of Action (VISCA) Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain