1. Pickard K, Davidson MJ, Kim S, Alais D. Incongruent active head rotations increase visual motion detection thresholds. Neurosci Conscious 2024; 2024:niae019. PMID: 38757119; PMCID: PMC11097904; DOI: 10.1093/nc/niae019.
Abstract
Attributing a visual motion signal to its correct source (be that external object motion, self-motion, or some combination of both) seems effortless, and yet often involves disentangling a complex web of motion signals. Existing literature focuses on either translational motion (heading) or eye movements, leaving much to be learnt about the influence of a wider range of self-motions, such as active head rotations, on visual motion perception. This study investigated how active head rotations affect visual motion detection thresholds, comparing conditions where visual motion and head-turn direction were either congruent or incongruent. Participants judged the direction of a visual motion stimulus while rotating their head or remaining stationary, using a fixation-locked Virtual Reality display with integrated head-movement recordings. Thresholds to perceive visual motion were higher in both active-head rotation conditions compared to stationary, though no differences were found between congruent or incongruent conditions. Participants also showed a significant bias to report seeing visual motion travelling in the same direction as the head rotation. Together, these results demonstrate that active head rotations increase visual motion perceptual thresholds, particularly in cases of incongruent visual and active vestibular stimulation.
Affiliation(s)
- Kate Pickard
- School of Psychology, The University of Sydney, Sydney, NSW 2006, Australia
- Matthew J Davidson
- School of Psychology, The University of Sydney, Sydney, NSW 2006, Australia
- Sujin Kim
- School of Psychology, The University of Sydney, Sydney, NSW 2006, Australia
- David Alais
- School of Psychology, The University of Sydney, Sydney, NSW 2006, Australia

2. van der Waal C, Embrechts E, Truijen S, Saeys W. Do we need to consider head-on-body position, starting roll position and presence of visuospatial neglect when assessing perception of verticality after stroke? Top Stroke Rehabil 2024; 31:244-258. PMID: 37671676; DOI: 10.1080/10749357.2023.2253622.
Abstract
BACKGROUND AND OBJECTIVE Considering various factors that influence the accuracy of the Subjective Visual Vertical (SVV) and Subjective Postural Vertical (SPV), standardization of assessment methods is needed. This retrospective study examined the contribution of Head-on-Body (HOB) position, starting roll position (SRP) and visuospatial neglect (VSN) to SVV and SPV constant errors (i.e. deviation from true vertical). Also, the contribution of HOB position and VSN presence to SVV and SPV variability (i.e. intra-individual consistency between trials) was assessed. METHODS First-ever unilateral hemispheric stroke survivors (<85 years; <100 days post-stroke) were assessed with three HOB positions (neutral, contralesional, and ipsilesional) and seven starting positions (20° contralesional to 20° ipsilesional) of the laser bar and tilt chair. Linear mixed models were selected to evaluate the contribution of HOB, SRP, and VSN to SVV/SPV constant errors and variability. RESULTS Thirty-four subjects (24 VSN-/10 VSN+) were assessed. A tilted HOB position led to significantly higher constant errors for the SVV and SPV (the latter only in the VSN- group), and an increased SVV variability. SRP only significantly contributed to the SVV constant errors and only in the VSN- group. Furthermore, the presence of VSN resulted in a significantly higher SVV and SPV variability. CONCLUSIONS HOB position, SRP, and the presence of VSN are important factors to consider during SVV and SPV measurements. Assessment with a neutral HOB position leads to more accurate results. HOB position and SRP influence the results of SVV and SPV differently in individuals with and without VSN, which highlights the relevance of VSN assessment.
Affiliation(s)
- Charlotte van der Waal
- Department of Rehabilitation Sciences and Physiotherapy, Faculty of Medicine and Health Sciences, University of Antwerp, Wilrijk, Belgium
- Research Group MOVANT, Department of Rehabilitation Sciences & Physiotherapy, University of Antwerp, Wilrijk, Belgium
- Elissa Embrechts
- Department of Rehabilitation Sciences and Physiotherapy, Faculty of Medicine and Health Sciences, University of Antwerp, Wilrijk, Belgium
- Research Group MOVANT, Department of Rehabilitation Sciences & Physiotherapy, University of Antwerp, Wilrijk, Belgium
- Department of Experimental Neuropsychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
- Steven Truijen
- Department of Rehabilitation Sciences and Physiotherapy, Faculty of Medicine and Health Sciences, University of Antwerp, Wilrijk, Belgium
- Research Group MOVANT, Department of Rehabilitation Sciences & Physiotherapy, University of Antwerp, Wilrijk, Belgium
- Wim Saeys
- Department of Rehabilitation Sciences and Physiotherapy, Faculty of Medicine and Health Sciences, University of Antwerp, Wilrijk, Belgium
- Research Group MOVANT, Department of Rehabilitation Sciences & Physiotherapy, University of Antwerp, Wilrijk, Belgium
- Department of Neurorehabilitation, RevArte Rehabilitation Hospital, Edegem, Belgium

3. Jörges B, Harris LR. The impact of visually simulated self-motion on predicting object motion. PLoS One 2024; 19:e0295110. PMID: 38483949; PMCID: PMC10939277; DOI: 10.1371/journal.pone.0295110.
Abstract
To interact successfully with moving objects in our environment we need to be able to predict their behavior. Predicting the position of a moving object requires an estimate of its velocity. When flow parsing during self-motion is incomplete (that is, when some of the retinal motion created by self-motion is incorrectly attributed to object motion), object velocity estimates become biased. Further, the process of flow parsing should add noise and lead to object velocity judgements being more variable during self-motion. Biases and lowered precision in velocity estimation should then translate to biases and lowered precision in motion extrapolation. We investigated this relationship between self-motion, velocity estimation and motion extrapolation with two tasks performed in a realistic virtual reality (VR) environment: first, participants were shown a ball moving laterally which disappeared after a certain time. They then indicated by button press when they thought the ball would have hit a target rectangle positioned in the environment. While the ball was visible, participants sometimes experienced simultaneous visual lateral self-motion in either the same or in the opposite direction of the ball. The second task was a two-interval forced choice task in which participants judged which of two motions was faster: in one interval they saw the same ball they observed in the first task while in the other they saw a ball cloud whose speed was controlled by a PEST staircase. While observing the single ball, they were again moved visually either in the same or opposite direction as the ball or they remained static. We found the expected biases in estimated time-to-contact, while for the speed estimation task, this was only the case when the ball and observer were moving in opposite directions. Our hypotheses regarding precision were largely unsupported by the data. Overall, we draw several conclusions from this experiment: first, incomplete flow parsing can affect motion prediction. Further, it suggests that time-to-contact estimation and speed judgements are determined by partially different mechanisms. Finally, and perhaps most strikingly, there appear to be certain compensatory mechanisms at play that allow for much higher-than-expected precision when observers are experiencing self-motion, even when self-motion is simulated only visually.
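
As a rough illustration of the flow-parsing logic summarized above (the notation and the linear-gain form are a simplification of mine, not the authors' fitted model), the predicted bias can be written as

\[
r = v_{\mathrm{obj}} - v_{\mathrm{self}}, \qquad
\hat{v}_{\mathrm{obj}} = r + g\,v_{\mathrm{self}} = v_{\mathrm{obj}} - (1-g)\,v_{\mathrm{self}}, \qquad
\widehat{\mathrm{TTC}} = \frac{d}{\hat{v}_{\mathrm{obj}}},
\]

where r is the lateral retinal motion of the ball, v_self the (visually simulated) self-motion, g the flow-parsing gain, and d the remaining distance to the target. With incomplete parsing (g < 1), a ball moving opposite to the self-motion has its speed overestimated and its time-to-contact correspondingly underestimated, which is the pattern of biases the study tests.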
Affiliation(s)
- Björn Jörges
- Center for Vision Research, York University, Toronto, Ontario, Canada

4. Yoshimura Y, Kizuka T, Ono S. The effect of real-world and retinal motion on speed perception for motion in depth. PLoS One 2023; 18:e0283018. PMID: 36928499; PMCID: PMC10019741; DOI: 10.1371/journal.pone.0283018.
Abstract
For motion in depth, even if the target moves at a constant speed in the real world (physically), it would appear to be moving with acceleration on the retina. Therefore, the purpose of this study was to determine whether real-world and retinal motion affect speed perception in depth and to verify the influence of eye movements on both motion signals in judging speed in depth. We used a two-alternative forced-choice paradigm with two types of tasks. One stimulus moved at a constant speed in the real world (world constant task) with three conditions: 80-60 cm (far), 60-40 cm (middle), and 40-20 cm (near) from the participant. The other stimulus moved at a constant speed on the retina (retinal constant task) with three conditions: 4-8 deg (far), 8-12 deg (middle), and 12-16 deg (near) as the vergence angle. The results showed that stimulus speed was perceived faster in the near condition than in the middle and far conditions for the world constant task, regardless of whether it was during fixation or convergence eye movements. In contrast, stimulus speed was perceived faster in the order of the far, middle, and near conditions for the retinal constant task. Our results indicate that speed perception of a visual target approaching the observer depends on real-world motion when the target position is relatively far from the observer. In contrast, retinal motion may influence speed perception when the target position is close to the observer. Our results also indicate that the effects of real-world and retinal motion on speed perception for motion in depth are similar with or without convergence eye movements. Therefore, it is suggested that when the visual target moves from far to near, the effects of real-world and retinal motion on speed perception are different depending on the initial target position.
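
The point that constant physical speed in depth maps onto accelerating retinal change can be made explicit with a simple geometric sketch (the symbols are mine and a symmetric-vergence approximation is assumed):

\[
\gamma(D) = 2\arctan\!\left(\frac{\mathrm{IPD}}{2D}\right), \qquad
\frac{d\gamma}{dt} = \frac{\mathrm{IPD}\,v}{D^{2} + \mathrm{IPD}^{2}/4} \quad \text{for } \frac{dD}{dt} = -v,
\]

so for a target approaching at a constant world speed v, the rate of change of the vergence angle γ grows as the viewing distance D shrinks: a world-constant stimulus is retinally accelerating, and a retinally constant stimulus must slow down in the world.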
Affiliation(s)
- Yusei Yoshimura
- Graduate School of Comprehensive Human Sciences, University of Tsukuba, Ibaraki, Japan
- Tomohiro Kizuka
- Faculty of Health and Sport Sciences, University of Tsukuba, Ibaraki, Japan
- Seiji Ono
- Faculty of Health and Sport Sciences, University of Tsukuba, Ibaraki, Japan

5. Jörges B, Harris LR. The impact of visually simulated self-motion on predicting object motion - A registered report protocol. PLoS One 2023; 18:e0267983. PMID: 36716328; PMCID: PMC9886253; DOI: 10.1371/journal.pone.0267983.
Abstract
To interact successfully with moving objects in our environment we need to be able to predict their behavior. Predicting the position of a moving object requires an estimate of its velocity. When flow parsing during self-motion is incomplete (that is, when some of the retinal motion created by self-motion is incorrectly attributed to object motion), object velocity estimates become biased. Further, the process of flow parsing should add noise and lead to object velocity judgements being more variable during self-motion. Biases and lowered precision in velocity estimation should then translate to biases and lowered precision in motion extrapolation. We investigate this relationship between self-motion, velocity estimation and motion extrapolation with two tasks performed in a realistic virtual reality (VR) environment: first, participants are shown a ball moving laterally which disappears after a certain time. They then indicate by button press when they think the ball would have hit a target rectangle positioned in the environment. While the ball is visible, participants sometimes experience simultaneous visual lateral self-motion in either the same or in the opposite direction of the ball. The second task is a two-interval forced choice task in which participants judge which of two motions is faster: in one interval they see the same ball they observed in the first task while in the other they see a ball cloud whose speed is controlled by a PEST staircase. While observing the single ball, they are again moved visually either in the same or opposite direction as the ball or they remain static. We expect participants to overestimate the speed of a ball that moves opposite to their simulated self-motion (speed estimation task), which should then lead them to underestimate the time it takes the ball to reach the target rectangle (prediction task). Seeing the ball during visually simulated self-motion should increase variability in both tasks. We expect to find performance in both tasks to be correlated, both in accuracy and precision.
Affiliation(s)
- Björn Jörges
- Center for Vision Research, York University, Toronto, Canada

6. Scotto CR, Moscatelli A, Pfeiffer T, Ernst MO. Visual pursuit biases tactile velocity perception. J Neurophysiol 2021; 126:540-549. PMID: 34259048; DOI: 10.1152/jn.00541.2020.
Abstract
During a smooth pursuit eye movement of a target stimulus, a briefly flashed stationary background appears to move in the direction opposite to the eye's motion, an effect known as the Filehne illusion. Similar illusions occur in audition, in the vestibular system, and in touch. Recently, we found that the movement of a surface perceived from tactile slip was biased if this surface was sensed with the moving hand. The analogy between these two illusions suggests similar mechanisms of motion processing between vision and touch. In the present study, we further assessed the interplay between these two sensory channels by investigating a novel paradigm that associated an eye pursuit of a visual target with a tactile motion over the skin of the fingertip. We showed that smooth pursuit eye movements can bias the perceived direction of motion in touch. Similarly to the classical report from the Filehne illusion in vision, a static tactile surface was perceived as moving rightward with a leftward eye pursuit movement, and vice versa. However, this time the direction of surface motion was perceived from touch. The biasing effects of eye pursuit on tactile motion were modulated by the reliability of the tactile and visual stimuli, consistent with a Bayesian model of motion perception.
NEW & NOTEWORTHY: The study showed that smooth pursuit eye movement produces a bias in tactile motion perception. This phenomenon is modulated by the reliability of the tactile estimate and by the presence of a visual background, in line with the predictions of the Bayesian framework of motion perception. Overall, these results support the hypothesis of shared representations for motion perception.
Affiliation(s)
- Cécile R Scotto
- Centre de Recherches sur la Cognition et l'Apprentissage, Université de Poitiers, Université François Rabelais de Tours, Centre National de la Recherche Scientifique, Poitiers, France
- Alessandro Moscatelli
- Department of Systems Medicine and Centre of Space Bio-Medicine, University of Rome "Tor Vergata", Rome, Italy
- Laboratory of Neuromotor Physiology, Istituto di Ricovero e Cura a Carattere Scientifico Santa Lucia Foundation, Rome, Italy
- Thies Pfeiffer
- Faculty of Technology and Cognitive Interaction Technology-Center of Excellence, Bielefeld University, Bielefeld, Germany
- Marc O Ernst
- Applied Cognitive Systems, Ulm University, Ulm, Germany

7. Kollegger G, Wiemeyer J, Ewerton M, Peters J. Perception and prediction of the putting distance of robot putting movements under different visual/viewing conditions. PLoS One 2021; 16:e0249518. PMID: 33891623; PMCID: PMC8064581; DOI: 10.1371/journal.pone.0249518.
Abstract
The purpose of this paper is to examine whether and under which conditions humans are able to predict the putting distance of a robotic device. Based on the “flash-lag effect” (FLE) it was expected that the prediction errors increase with increasing putting velocity. Furthermore, we hypothesized that the predictions are more accurate and more confident if human observers operate under full vision (F-RCHB) compared to either temporal occlusion (I-RCHB) or spatial occlusion (invisible ball, F-RHC, or club, F-B). In two experiments, 48 video sequences of putt movements performed by a BioRob robot arm were presented to thirty-nine students (age: 24.49±3.20 years). In the experiments, video sequences included six putting distances (1.5, 2.0, 2.5, 3.0, 3.5, and 4.0 m; experiment 1) under full versus incomplete vision (F-RCHB versus I-RCHB) and three putting distances (2.0, 3.0, and 4.0 m; experiment 2) under the four visual conditions (F-RCHB, I-RCHB, F-RCH, and F-B). After the presentation of each video sequence, the participants estimated the putting distance on a scale from 0 to 6 m and provided their confidence of prediction on a 5-point scale. Both experiments show comparable results for the respective dependent variables (error and confidence measures). The participants consistently overestimated the putting distance under the full vision conditions; however, the experiments did not show a pattern that was consistent with the FLE. Under the temporal occlusion condition, a prediction was not possible; rather a random estimation pattern was found around the centre of the prediction scale (3 m). Spatial occlusion did not affect errors and confidence of prediction. The experiments indicate that temporal constraints seem to be more critical than spatial constraints. The FLE may not apply to distance prediction compared to location estimation.
Affiliation(s)
- Gerrit Kollegger
- Department of Human Sciences, Institute for Sport Science, Technische Universität Darmstadt, Darmstadt, Germany
- Josef Wiemeyer
- Department of Human Sciences, Institute for Sport Science, Technische Universität Darmstadt, Darmstadt, Germany
- Marco Ewerton
- Intelligent Autonomous Systems Group, Department of Computer Science, Technische Universität Darmstadt, Darmstadt, Germany
- Jan Peters
- Intelligent Autonomous Systems Group, Department of Computer Science, Technische Universität Darmstadt, Darmstadt, Germany

8. Jörges B, López-Moliner J. Determining mean and standard deviation of the strong gravity prior through simulations. PLoS One 2020; 15:e0236732. PMID: 32813686; PMCID: PMC7446919; DOI: 10.1371/journal.pone.0236732.
Abstract
Humans expect downwards moving objects to accelerate and upwards moving objects to decelerate. These results have been interpreted as humans maintaining an internal model of gravity. We have previously suggested an interpretation of these results within a Bayesian framework of perception: earth gravity could be represented as a Strong Prior that overrules noisy sensory information (Likelihood) and therefore attracts the final percept (Posterior) very strongly. Based on this framework, we use published data from a timing task involving gravitational motion to determine the mean and the standard deviation of the Strong Earth Gravity Prior. To get its mean, we refine a model of mean timing errors we proposed in a previous paper (Jörges & López-Moliner, 2019), while expanding the range of conditions under which it yields adequate predictions of performance. This underscores our previous conclusion that the gravity prior is likely to be very close to 9.81 m/s². To obtain the standard deviation, we identify different sources of sensory and motor variability reflected in timing errors. We then model timing responses based on quantitative assumptions about these sensory and motor errors for a range of standard deviations of the earth gravity prior, and find that a standard deviation of around 2 m/s² makes for the best fit. This value is likely to represent an upper bound, as there are strong theoretical reasons along with supporting empirical evidence for the standard deviation of the earth gravity prior being lower than this value.
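
Read through a standard Gaussian cue-combination lens (a generic textbook formulation, not the authors' full simulation), the "strong prior" claim amounts to the posterior estimate of gravitational acceleration being dominated by the prior:

\[
\hat{g} \;=\; \frac{\sigma_{\mathrm{sens}}^{2}\,\mu_{\mathrm{prior}} + \sigma_{\mathrm{prior}}^{2}\,g_{\mathrm{sens}}}{\sigma_{\mathrm{sens}}^{2} + \sigma_{\mathrm{prior}}^{2}},
\qquad \mu_{\mathrm{prior}} \approx 9.81\ \mathrm{m/s^{2}}, \quad \sigma_{\mathrm{prior}} \approx 2\ \mathrm{m/s^{2}},
\]

where g_sens is the noisy sensory estimate of the visual acceleration and σ_sens its standard deviation. Because visual estimates of acceleration are poor (σ_sens much larger than σ_prior), the posterior stays close to 9.81 m/s² almost regardless of the stimulus, which is what the abstract means by a prior that overrules the likelihood.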
Affiliation(s)
- Björn Jörges
- Center for Vision Research, York University, Toronto, ON, Canada
- Joan López-Moliner
- Vision and Control of Action (VISCA) group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Catalonia, Spain

9. Terao M. Direction of Apparent Motion During Smooth Pursuit Is Determined Using a Mixture of Retinal and Objective Proximities. Iperception 2020; 11:2041669520937320. PMID: 32647561; PMCID: PMC7328061; DOI: 10.1177/2041669520937320.
Abstract
Many studies have investigated various effects of smooth pursuit on visual motion processing, especially the effects related to the additional retinal shifts produced by eye movement. In this article, we show that the perception of apparent motion during smooth pursuit is determined by the interelement proximity in retinal coordinates and also by the proximity in objective world coordinates. In Experiment 1, we investigated the perceived direction of the two-frame apparent motion of a square-wave grating with various displacement sizes under fixation and pursuit viewing conditions. The retinal and objective displacements between the two frames agreed with each other under the fixation condition. However, under the pursuit condition, the displacements differed by 180 degrees in terms of phase shift. The proportions of the reported motion direction between the two viewing conditions did not coincide when they were plotted as a function of either the retinal displacement or of the objective displacement; however, they did coincide when plotted as a function of a mixture of the two. The result from Experiment 2 showed that the perceived jump size of the apparent motion was also dependent on both retinal and objective displacements. Our findings suggest that the detection of the apparent motion during smooth pursuit considers the retinal proximity and also the objective proximity. This mechanism may assist with the selection of a motion path that is more likely to occur in the real world and, therefore, be useful for ensuring perceptual stability during smooth pursuit.
Affiliation(s)
- Masahiko Terao
- The Research Institute for Time Studies, Yamaguchi University

10. Moscatelli A, Scotto CR, Ernst MO. Illusory changes in the perceived speed of motion derived from proprioception and touch. J Neurophysiol 2019; 122:1555-1565. PMID: 31314634; DOI: 10.1152/jn.00719.2018.
Abstract
In vision, the perceived velocity of a moving stimulus differs depending on whether we pursue it with the eyes or not: A stimulus moving across the retina with the eyes stationary is perceived as being faster compared with a stimulus of the same physical speed that the observer pursues with the eyes, while its retinal motion is zero. This effect is known as the Aubert-Fleischl phenomenon. Here, we describe an analog phenomenon in touch. We asked participants to estimate the speed of a moving stimulus either from tactile motion only (i.e., motion across the skin), while keeping the hand stationary in the world, or from kinesthesia only by tracking the stimulus with a guided arm movement, such that the tactile motion on the finger was zero (i.e., only finger motion but no movement across the skin). Participants overestimated the velocity of the stimulus determined from tactile motion compared with kinesthesia, in analogy with the visual Aubert-Fleischl phenomenon. In two follow-up experiments, we manipulated the stimulus noise by changing the texture of the touched surface. Similarly to the visual phenomenon, this significantly affected the strength of the illusion. This study supports the hypothesis of shared computations for motion processing between vision and touch.
NEW & NOTEWORTHY: In vision, the perceived velocity of a moving stimulus is different depending on whether we pursue it with the eyes or not, an effect known as the Aubert-Fleischl phenomenon. We describe an analog phenomenon in touch. We asked participants to estimate the speed of a moving stimulus either from tactile motion or by pursuing it with the hand. Participants overestimated the stimulus velocity measured from tactile motion compared with kinesthesia, in analogy with the visual Aubert-Fleischl phenomenon.
Affiliation(s)
- Alessandro Moscatelli
- Department of Systems Medicine and Centre of Space Biomedicine, University of Rome Tor Vergata, Rome, Italy
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Cognitive Interaction Technology-Cluster of Excellence, Bielefeld University, Bielefeld, Germany
- Cecile R Scotto
- Centre de Recherches sur la Cognition et l'Apprentissage, Université de Poitiers-Université de Tours-Centre National de la Recherche Scientifique, Poitiers, France
- Cognitive Interaction Technology-Cluster of Excellence, Bielefeld University, Bielefeld, Germany
- Marc O Ernst
- Applied Cognitive Psychology, Ulm University, Ulm, Germany
- Cognitive Interaction Technology-Cluster of Excellence, Bielefeld University, Bielefeld, Germany

11. Garzorz IT, Freeman TCA, Ernst MO, MacNeilage PR. Insufficient compensation for self-motion during perception of object speed: The vestibular Aubert-Fleischl phenomenon. J Vis 2018; 18:9. DOI: 10.1167/18.13.9.
Affiliation(s)
- Isabelle T. Garzorz
- German Center for Vertigo and Balance Disorders (DSGZ), University Hospital of Munich, Ludwig Maximilian University, Munich, Germany
- Graduate School of Systemic Neurosciences (GSN), Ludwig Maximilian University, Planegg-Martinsried, Germany
- Marc O. Ernst
- Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, Ulm, Germany
- Paul R. MacNeilage
- German Center for Vertigo and Balance Disorders (DSGZ), University Hospital of Munich, Ludwig Maximilian University, Munich, Germany
- Present address: Department of Psychology, Cognitive and Brain Sciences, University of Nevada, Reno, NV, USA

12. Moscatelli A, Hayward V, Wexler M, Ernst MO. Illusory Tactile Motion Perception: An Analog of the Visual Filehne Illusion. Sci Rep 2015; 5:14584. PMID: 26412592; PMCID: PMC4585937; DOI: 10.1038/srep14584.
Abstract
We continually move our body and our eyes when exploring the world, causing our sensory surfaces, the skin and the retina, to move relative to external objects. In order to estimate object motion consistently, an ideal observer would transform estimates of motion acquired from the sensory surface into fixed, world-centered estimates, by taking the motion of the sensor into account. This ability is referred to as spatial constancy. Human vision does not follow this rule strictly and is therefore subject to perceptual illusions during eye movements, where immobile objects can appear to move. Here, we investigated whether one of these, the Filehne illusion, had a counterpart in touch. To this end, observers estimated the movement of a surface from tactile slip, with a moving or with a stationary finger. We found the perceived movement of the surface to be biased if the surface was sensed while moving. This effect exemplifies a failure of spatial constancy that is similar to the Filehne illusion in vision. We quantified this illusion by using a Bayesian model with a prior for stationarity, applied previously in vision. The analogy between vision and touch points to a modality-independent solution to the spatial constancy problem.
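
A minimal numerical sketch of a stationarity-prior account of this kind of bias is given below. It is illustrative only: the function names, the shrinkage form, and all parameter values are assumptions of mine, not the model or the fits reported in the paper.

# Each velocity cue is shrunk toward zero by a zero-mean "stationarity" prior before the
# cues are summed into a world-centred estimate; the noisier cue is shrunk more, so the
# sum no longer cancels for a physically stationary surface.

def shrink_toward_zero(measurement, measurement_sd, prior_sd):
    """Precision-weighted (MAP) combination of a noisy measurement with a zero-mean prior."""
    w = (1.0 / measurement_sd**2) / (1.0 / measurement_sd**2 + 1.0 / prior_sd**2)
    return w * measurement

def perceived_surface_velocity(slip, slip_sd, hand, hand_sd, prior_sd):
    # World-centred surface velocity = slip across the skin + motion of the finger in space.
    return (shrink_toward_zero(slip, slip_sd, prior_sd)
            + shrink_toward_zero(hand, hand_sd, prior_sd))

# A stationary surface touched with a finger moving at +10 cm/s produces -10 cm/s of slip.
# With the (hypothetical) kinaesthetic cue noisier than the slip cue, the combined estimate
# is negative: the surface appears to drift against the finger movement, the failure of
# spatial constancy described above. All numbers are made up for illustration.
print(perceived_surface_velocity(slip=-10.0, slip_sd=1.0, hand=10.0, hand_sd=3.0, prior_sd=4.0))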
Affiliation(s)
- Alessandro Moscatelli
- Department of Cognitive Neuroscience, University of Bielefeld, Bielefeld, Germany
- Cognitive Interaction Technology Centre of Excellence, University of Bielefeld, Bielefeld, Germany
- Vincent Hayward
- Sorbonne Universités, UPMC Univ Paris 06, UMR 7222, ISIR, F-75005, Paris, France
- Mark Wexler
- CNRS, UMR 7222, ISIR, F-75005, Paris, France
- Laboratoire Psychologie de la Perception and CNRS, Université Paris Descartes, F-75006 Paris, France
- Marc O Ernst
- Department of Cognitive Neuroscience, University of Bielefeld, Bielefeld, Germany
- Cognitive Interaction Technology Centre of Excellence, University of Bielefeld, Bielefeld, Germany
- Multisensory Perception and Action Group, Max Planck Institute for Biological Cybernetics, Tübingen, Germany

13. The Haptic Analog of the Visual Aubert-Fleischl Phenomenon. Haptics: Neuroscience, Devices, Modeling, and Applications 2014. DOI: 10.1007/978-3-662-44196-1_5.

14. Integration of visual and inertial cues in the perception of angular self-motion. Exp Brain Res 2013; 231:209-18. DOI: 10.1007/s00221-013-3683-1.

15. Furman M, Gur M. And yet it moves: Perceptual illusions and neural mechanisms of pursuit compensation during smooth pursuit eye movements. Neurosci Biobehav Rev 2012; 36:143-51. DOI: 10.1016/j.neubiorev.2011.05.005.

17. Slaboda JC, Lauer RT, Keshner EA. Continuous visual field motion impacts the postural responses of older and younger women during and after support surface tilt. Exp Brain Res 2011; 211:87-96. PMID: 21479659; DOI: 10.1007/s00221-011-2655-6.
Abstract
The effect of continuous visual flow on the ability to regain and maintain postural orientation was examined. Fourteen young (20-39 years old) and 14 older women (60-79 years old) stood quietly during 3° (30°/s) dorsiflexion tilt of the support surface combined with 30° and 45°/s upward or downward pitch rotations of the visual field. The support surface was held tilted for 30 s and then returned to neutral over a 30-s period while the visual field continued to rotate. Segmental displacement and bilateral tibialis anterior and gastrocnemius muscle EMG responses were recorded. Continuous wavelet transforms were calculated for each muscle EMG response. An instantaneous mean frequency curve (IMNF) of muscle activity, center of mass (COM), center of pressure (COP), and angular excursion at the hip and ankle were used in a functional principal component analysis (fPCA). Functional component weights were calculated and compared with mixed model repeated measures ANOVAs. The fPCA revealed greatest mathematical differences in COM and COP responses between groups or conditions during the period that the platform transitioned from the sustained tilt to a return to neutral position. Muscle EMG responses differed most in the period following support surface tilt indicating that muscle activity increased to support stabilization against the visual flow. Older women exhibited significantly larger COM and COP responses in the direction of visual field motion and less muscle modulation when the platform returned to neutral than younger women. Results on a Rod and Frame test indicated that older women were significantly more visually dependent than the younger women. We concluded that a stiffer body combined with heightened visual sensitivity in older women critically interferes with their ability to counteract posturally destabilizing environments.
Affiliation(s)
- Jill C Slaboda
- Department of Physical Therapy, College of Health Professions and Social Work, Temple University, Philadelphia, PA 19140, USA.

18. Bennett SJ, Baures R, Hecht H, Benguigui N. Eye movements influence estimation of time-to-contact in prediction motion. Exp Brain Res 2010; 206:399-407. PMID: 20862463; DOI: 10.1007/s00221-010-2416-y.
Abstract
In many situations, it is necessary to predict when a moving object will reach a given target even though the object may be partially or entirely occluded. Typically, one would track the moving object with eye movements, but it remains unclear whether ocular pursuit facilitates accurate estimation of time-to-contact (TTC). The present study examined this issue using a prediction-motion (PM) task in which independent groups estimated TTC in a condition that required fixation on the arrival location as an object approached, or a condition in which participants were instructed to pursue the moving object. The design included 15 TTC values ranging from 0.4 to 1.5 s and three object velocities (2.5, 5, 10 deg/s). Both constant error and variable error in TTC estimation increased as a function of actual TTC. However, for the fixation group only, there was a significant effect of object velocity with a relative overestimation of TTC for the slower velocity and underestimation for the faster velocity. Further analysis indicated that the velocity effect exhibited by the fixation group was consistent with participants exhibiting a relatively constant misperception for each level of object velocity. Overall, these findings show that there is an advantage in the PM task to track the moving object with the eyes. We explain the different pattern of TTC estimation error exhibited when fixating and during pursuit with reference to differences in the available retinal and/or extra-retinal input.
Affiliation(s)
- Simon J Bennett
- Research Institute for Exercise and Sport Sciences, Liverpool John Moores University, Henry Cotton Campus, Liverpool, L3 2ET, UK.

19. Champion RA, Freeman TCA. Discrimination contours for the perception of head-centered velocity. J Vis 2010; 10:14. PMID: 20884563; DOI: 10.1167/10.6.14.
Abstract
There is little direct psychophysical evidence that the visual system contains mechanisms tuned to head-centered velocity when observers make a smooth pursuit eye movement. Much of the evidence is implicit, relying on measurements of bias (e.g., matching and nulling). We therefore measured discrimination contours in a space dimensioned by pursuit target motion and relative motion between target and background. Within this space, lines of constant head-centered motion are parallel to the main negative diagonal, so judgments dominated by mechanisms that combine individual components should produce contours with a similar orientation. Conversely, contours oriented parallel to the cardinal axes of the space indicate judgments based on individual components. The results provided evidence for mechanisms tuned to head-centered velocity: discrimination ellipses were significantly oriented away from the cardinal axes, toward the main negative diagonal. However, ellipse orientation was considerably less steep than predicted by a pure combination of components. This suggests that observers used a mixture of two strategies across trials, one based on individual components and another based on their sum. We provide a model that simulates this type of behavior and is able to reproduce the ellipse orientations we found.
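
The geometry referred to in the abstract can be stated in one line (the sign convention is mine):

\[
H = T + R,
\]

where T is the pursuit-target motion, R the relative motion of the background with respect to the target, and H the head-centered motion of the background. Contours of constant H therefore satisfy T + R = const, i.e. lines of slope -1 (the main negative diagonal), so an observer whose judgments rest on the summed signal should produce discrimination contours elongated along that diagonal, whereas judgments based on either component alone align with the cardinal axes.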

23.
Abstract
According to the traditional inferential theory of perception, percepts of object motion or stationarity stem from an evaluation of afferent retinal signals (which encode image motion) with the help of extraretinal signals (which encode eye movements). According to direct perception theory, on the other hand, the percepts derive from retinally conveyed information only. Neither view is compatible with a perceptual phenomenon that occurs during visually induced sensations of ego motion (vection). A modified version of inferential theory yields a model in which the concept of extraretinal signals is replaced by that of reference signals, which do not encode how the eyes move in their orbits but how they move in space. Hence reference signals are produced not only during eye movements but also during ego motion (i.e., in response to vestibular stimulation and to retinal image flow, which may induce vection). The present theory describes the interface between self-motion and object-motion percepts. An experimental paradigm that allows quantitative measurement of the magnitude and gain of reference signals and the size of the just noticeable difference (JND) between retinal and reference signals reveals that the distinction between direct and inferential theories largely depends on: (1) a mistaken belief that perceptual veridicality is evidence that extraretinal information is not involved, and (2) a failure to distinguish between (the perception of) absolute object motion in space and relative motion of objects with respect to each other. The model corrects these errors, and provides a new, unified framework for interpreting many phenomena in the field of motion perception.

27. Carriot J, DiZio P, Nougier V. Vertical frames of reference and control of body orientation. Neurophysiol Clin 2008; 38:423-37. DOI: 10.1016/j.neucli.2008.09.003.

28. Nefs HT, Harris JM. Vergence effects on the perception of motion-in-depth. Exp Brain Res 2007; 183:313-22. PMID: 17643235; DOI: 10.1007/s00221-007-1046-5.
Abstract
When the eyes follow a target that is moving directly towards the head they make a vergence eye movement. Accurate perception of the target's motion requires adequate compensation for the movements of the eyes. The experiments in this paper address the issue of how well the visual system compensates for vergence eye movements when viewing moving targets. We show that there are small but consistent biases across observers: When the eyes follow a target that is moving in depth, it is typically perceived as slower than when the eyes are kept stationary. We also analysed the eye movements that were made by observers. We found that there are considerable differences between observers and between trials, but we did not find evidence that the gains and phase lags of the eye movements were related to psychophysical performance.
Affiliation(s)
- Harold T Nefs
- School of Psychology, University of St Andrews, South Street, St Andrews, KY16 9JP, Scotland (UK).

29. Souman JL, Hooge ITC, Wertheim AH. Localization and motion perception during smooth pursuit eye movements. Exp Brain Res 2005; 171:448-58. PMID: 16331504; DOI: 10.1007/s00221-005-0287-4.
Abstract
We investigated the relationship between compensation for the effects of smooth pursuit eye movements in localization and motion perception. Participants had to indicate the perceived motion direction, the starting point and the end point of a vertically moving stimulus dot presented during horizontal smooth pursuit. The presentation duration of the stimulus was varied. From the indicated starting and end points, the motion direction was predicted and compared with the actual indicated directions. Both the directions predicted from localization and the indicated directions deviated from the physical directions, but the errors in the predicted directions were larger than those in the indicated directions. The results of a control experiment, in which the same tasks were performed during fixation, suggest that this difference reflects different transformations from a retinocentric to a head-centric frame of reference. This difference appears to be mainly due to an asymmetry in the effect of retinal image motion direction on localization during smooth pursuit.
Affiliation(s)
- Jan L Souman
- Helmholtz Institute, Department of Psychonomics, Utrecht University, Utrecht, The Netherlands.

30. Groner R, Schollerer E. Perceived velocity of point-light walkers under complex viewing and background conditions. Japanese Psychological Research 2005. DOI: 10.1111/j.1468-5884.2005.00289.x.

31. Furman M, Gur M. Alteration of the perceived path of a non-pursued target during smooth pursuit: Analysis by a neural network model. Vision Res 2005; 45:1755-68. PMID: 15792848; DOI: 10.1016/j.visres.2004.12.012.
Abstract
During pursuit of a circularly moving target, the perceived movement of a second circularly moving target is altered. The perceived movement of the non-pursued target is different from both its real movement path and its retinal path. In the present paper this phenomenon is studied using a physiologically based neural network model. Simulation results were compared to psychophysical findings in human subjects. Model simulations enabled us to suggest an explanation for this phenomenon in terms of underlying physiological mechanisms and to estimate the contribution of the efferent eye-movement signal to the perceptual process.
Affiliation(s)
- Moran Furman
- Department of Biomedical Engineering, Technion, Israel Institute of Technology, Haifa 32000, Israel

32. Schollerer E, Groner R. The Effect of Observer Perspective on the Perceived Velocity of Human Walkers. Swiss Journal of Psychology 2004. DOI: 10.1024/1421-0185.63.3.191.
Abstract
The apparent velocity of a filmed person, walking in front of static or moving backgrounds, was estimated in 2 experiments by 18 observers. The camera either followed the walker or remained at the same position (= stabilized vs. mobile observer perspective). A factorial ANOVA was used with the estimate of the walker’s velocity (in km/h) as dependent variable. Based on the number of applicable motion cues and on the role of motion parallax, it was predicted that the mobile observer perspective should lead to a higher estimate of the walker’s velocity. In both experiments, the opposite of this prediction was observed: Stabilized observer perspective produced consistently higher velocity estimates as a main effect and in interaction with the background variables. No velocity increasing effect of motion parallax was found in stabilized observer perspective, presumably because of the ambiguity of motion cues with respect to background distance.

33.
Abstract
By adding retinal and pursuit eye-movement velocity one can determine the motion of an object with respect to the head. It would seem likely that the visual system carries out a similar computation by summing extra-retinal, eye-velocity signals with retinal motion signals. Perceived head-centred motion may therefore be determined by differences in the way these signals encode speed. For example, if extra-retinal signals provide the lower estimate of speed then moving objects will appear slower when pursued (Aubert-Fleischl phenomenon) and stationary objects will appear to move opposite to an eye movement (Filehne illusion). Most previous work proposes that these illusions exist because retinal signals encode retinal motion accurately while extra-retinal signals under-estimate eye speed. A more general model is presented in which both signals could be in error. Two types of input/output speed relationship are examined. The first uses linear speed transducers and the second non-linear speed transducers, the latter based on power laws. It is shown that studies of the Aubert-Fleischl phenomenon and Filehne illusion reveal the gain ratio or power ratio alone. We also consider general velocity-matching and show that in theory matching functions are limited by gain ratio in the linear case. However, in the non-linear case individual transducer shapes are revealed albeit up to an unknown scaling factor. The experiments show that the Aubert-Fleischl phenomenon and Filehne illusion are adequately described by linear speed transducers with a gain ratio less than one. For some observers, this is also the case in general velocity-matching experiments. For other observers, however, behaviour is non-linear and, according to the transducer model, indicates the existence of expansive non-linearities in speed encoding. This surprising result is discussed in relation to other theories of head-centred motion perception and the possible strategies some observers might adopt when judging stimulus motion during an eye movement.
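
For the linear-transducer case discussed in the abstract, the argument can be summarized as follows (the notation is mine and only the linear case is shown):

\[
\hat{H} = g_{r}\,r + g_{e}\,e,
\]

where r is retinal velocity, e eye velocity, and g_r and g_e the gains of the retinal and extra-retinal transducers. For a stationary background during pursuit at speed e, the retinal motion is r = -e, so the perceived head-centred speed is (g_e - g_r)e: the Filehne illusion arises whenever g_e/g_r < 1, with the background seen to drift against the pursuit. Matching a pursued target (r = 0, perceived speed g_e·e) against a fixated one (perceived speed g_r·e) yields the Aubert-Fleischl slowing by the same factor g_e/g_r, so both illusions constrain only the gain ratio, not the individual gains, which is the point made above.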
Affiliation(s)
- T C Freeman
- School of Psychology, Cardiff University, PO Box 901, CF10 3YG, Cardiff, UK.

34. Schroeder JA, Chung WW, Hess RA. Evaluation of a motion fidelity criterion with visual scene changes. Journal of Aircraft 2000; 37:580-587. PMID: 11543542; DOI: 10.2514/2.2669.
Abstract
An experiment examined how visual scene and platform motion variations affected a pilot's ability to perform altitude changes. Pilots controlled a helicopter model in the vertical axis and moved between two points 32 ft apart in a specified time. Four factors were varied: visual-scene spatial frequency, visual-scene background, motion-filter gain, and motion-filter natural frequency. Drawing alternating black and white stripes of varying widths between the two extreme altitude points varied visual-scene spatial frequency. The visual-scene background was varied either by drawing the stripes to fill the entire field of view or by placing the stripes on a narrow pole with a natural sky and ground plane behind the pole. Both the motion-filter gain and natural frequency were varied in the motion platform command software. Five pilots evaluated all combinations of the visual and motion variations. The results showed that only the motion-filter natural frequency and visual-scene background affected pilot performance and their subjective ratings. No significant effects of spatial frequency or motion system gain were found for the values examined in this tracking task. A previous motion fidelity criterion was found to still be a reasonable predictor of motion fidelity.
Affiliation(s)
- J A Schroeder
- NASA Ames Research Center, Moffett Field, California 94035, USA

35. van Donkelaar P, Miall RC, Stein JF. Changes in motion perception following oculomotor smooth pursuit adaptation. Perception & Psychophysics 2000; 62:378-85. PMID: 10723216; DOI: 10.3758/bf03205557.
Abstract
The hypothesis that oculomotor smooth pursuit (SP) adaptation is accompanied by alterations in velocity perception was tested by assessing coherence thresholds, using random-dot kinematograms before and after the adaptation paradigm. The results showed that the sensitivity to coherent motion at 10 deg/sec (the initial target velocity during adaptation) was reduced after the SP adaptation, ending up at a level that was between those normally observed for velocities of 10 and 20 deg/sec. This is consistent with an overestimation of the velocity of the coherent motion and suggests that SP adaptation alters not only the oculomotor output, but also the perception of target velocity.
Affiliation(s)
- P van Donkelaar
- Dept. of Exercise and Movement Science, University of Oregon, Eugene 97403-1240, USA.

36. Freeman TC, Banks MS. Perceived head-centric speed is affected by both extra-retinal and retinal errors. Vision Res 1998; 38:941-5. PMID: 9666976; DOI: 10.1016/s0042-6989(97)00395-7.
Abstract
When we make a smooth eye movement to track a moving object, the visual system must take the eye's movement into account in order to estimate the object's velocity relative to the head. This can be done by using extra-retinal signals to estimate eye velocity and then subtracting expected from observed retinal motion. Two familiar illusions of perceived velocity, the Filehne illusion and the Aubert-Fleischl phenomenon, are thought to be the consequence of the extra-retinal signal underestimating eye velocity. These explanations assume that retinal motion is encoded accurately, which is questionable because perceived retinal speed is strongly affected by several stimulus properties. We develop and test a model of head-centric velocity perception that incorporates errors in estimating eye velocity and in retinal-motion sensing. The model predicts that the magnitude and direction of the Filehne illusion and Aubert-Fleischl phenomenon depend on spatial frequency, and this prediction is confirmed experimentally.
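
The sign reversal with spatial frequency that the abstract describes falls out of even a toy version of such a model. The sketch below is purely illustrative: the linear-gain form, the gain values, and the spatial-frequency dependence are assumptions of mine, not the fitted model from the paper.

# Perceived head-centric motion = transduced extra-retinal (eye) signal + transduced retinal signal.
# If the retinal gain depends on spatial frequency, the Filehne bias can change direction.

EXTRA_RETINAL_GAIN = 0.8  # hypothetical underestimation of eye velocity

def retinal_gain(spatial_frequency):
    # Hypothetical fall-off of perceived retinal speed with spatial frequency.
    return 1.1 - 0.1 * spatial_frequency

def filehne_bias(eye_speed, spatial_frequency):
    """Perceived head-centric speed of a physically stationary background during pursuit.

    The background moves at -eye_speed on the retina; a negative result means the background
    appears to drift against the pursuit (the classical Filehne direction), a positive result
    means it appears to drift with the pursuit (a reversed Filehne illusion).
    """
    return EXTRA_RETINAL_GAIN * eye_speed + retinal_gain(spatial_frequency) * (-eye_speed)

for sf in (0.5, 2.0, 4.0):  # cycles/deg, hypothetical values
    print(sf, filehne_bias(eye_speed=10.0, spatial_frequency=sf))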
Affiliation(s)
- T C Freeman
- School of Optometry, University of California, Berkeley, USA.

37. van Donkelaar P, Lee RG. The role of vision and eye motion during reaching to intercept moving targets. Hum Mov Sci 1994. DOI: 10.1016/0167-9457(94)90017-5.

38. A cortical substrate for motion perception during self-motion. Behav Brain Sci 1994. DOI: 10.1017/s0140525x00034907.

39. What does linear vection tell us about the optokinetic pathway? Behav Brain Sci 1994. DOI: 10.1017/s0140525x00034841.

40. Ambiguities in mathematically modelling the dynamics of motion perception. Behav Brain Sci 1994. DOI: 10.1017/s0140525x00034737.

41. Extending reference signal theory to rapid movements. Behav Brain Sci 1994. DOI: 10.1017/s0140525x00034695.

42. Analysis of information for 3-D motion perception: The role of eye movements. Behav Brain Sci 1994. DOI: 10.1017/s0140525x00034658.

43. A theory of the perceptual stability of the visual world rather than of motion perception. Behav Brain Sci 1994. DOI: 10.1017/s0140525x0003466x.

45. Perception of motion with respect to multiple criteria. Behav Brain Sci 1994. DOI: 10.1017/s0140525x00034816.

46. Sensor fusion in motion perception. Behav Brain Sci 1994. DOI: 10.1017/s0140525x00034713.

47. Ego-centered and environment-centered perceptions of self-movement. Behav Brain Sci 1994. DOI: 10.1017/s0140525x00034828.

48. Wertheim's “reference” signal: Successful in explaining perception of absolute motion, but how about relative motion? Behav Brain Sci 1994. DOI: 10.1017/s0140525x00034786.

49. The inferential model of motion perception during self-motion cannot apply at constant velocity. Behav Brain Sci 1994. DOI: 10.1017/s0140525x00034750.